I suppose this is the innovative part. They're not simulating just the string, but also the fluid it's immersed in, which is a computationally hard problem.
I made a vibrating string simulator in college for our Numerical Methods course and for quite a while I couldn't understand why it sounded so bad.
Turns out rounding errors in floating point operations can propagate to a point where they produce this distinct, "metallic" sound.
They're incredibly small, but if your system of differential equations is large enough, they'll become noticeable. Switching to an algorithm with better numerical stability would probably mitigate this issue, but I didn't get that far with my project.
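For anyone curious, the standard explicit finite-difference scheme for an ideal string is only a few lines. Here's a minimal sketch (my own variable names, not from any particular project), including the Courant condition lambda = c*dt/dx <= 1 that governs the scheme's stability; violating it is one classic way to get exactly the kind of blow-up or metallic artifacts described above:

```python
import numpy as np

# Explicit finite-difference scheme for the 1D wave equation
# u_tt = c^2 u_xx (an ideal string, fixed at both ends).
N = 200          # number of grid intervals
c = 1.0          # wave speed
dx = 1.0 / N
courant = 1.0    # lambda = c*dt/dx; the scheme is stable only for lambda <= 1
dt = courant * dx / c

x = np.linspace(0.0, 1.0, N + 1)
u_prev = np.sin(np.pi * x)   # initial displacement: one half-wave
u_curr = u_prev.copy()       # zero initial velocity (first-order start)
lam2 = (c * dt / dx) ** 2

samples = []
for _ in range(1000):
    u_next = np.empty_like(u_curr)
    # u^{n+1}_i = 2 u^n_i - u^{n-1}_i + lam^2 (u^n_{i+1} - 2 u^n_i + u^n_{i-1})
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + lam2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = u_next[-1] = 0.0      # fixed ends
    u_prev, u_curr = u_curr, u_next
    samples.append(u_curr[N // 2])    # "record" the string's midpoint
```

With lambda below 1 the scheme stays stable but picks up numerical dispersion, which detunes the upper partials — one plausible source of that "metallic" character quite apart from rounding error.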
Reminds me of a Karplus-Strong synthesis implementation that produced a gorgeous guitar/mandolin sound, but only for delay durations that weren't simple ratios of the sample rate. The simple-ratio durations would end up sounding like crude, attenuated bursts of noise: the metallic sound you'd expect from a pitch produced in a KSS demo. Everything else had some kind of subtle interpolation error that ended up shaping the noise just enough to make it sound like a million bucks.
The problem with most KSS is that the filter used will typically saturate the timbre. So rather than hearing a guitar string, you're hearing a guitar-adjacent interpolation scheme whose prominence makes you wonder just how un-guitarlike the original unfiltered sound must have been.
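For reference, the textbook Karplus-Strong loop really is tiny: a noise burst recirculating through a delay line whose output is smoothed by a two-sample average. A minimal sketch (integer delay only, so pitch is quantized to sample_rate / N; the fractional-delay interpolation discussed above is deliberately left out, and the 0.996 damping factor is an illustrative choice):

```python
import random

def karplus_strong(freq, sample_rate=44100, duration=1.0, seed=0):
    """Textbook Karplus-Strong: a noise burst fed through a delay line,
    smoothed each pass by a two-sample average (a crude lowpass)."""
    rng = random.Random(seed)
    n = int(sample_rate / freq)                       # integer delay length
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # the "pluck"
    out = []
    for i in range(int(sample_rate * duration)):
        s = buf[i % n]
        # recirculate: damped average of this sample and the next one
        buf[i % n] = 0.996 * 0.5 * (s + buf[(i + 1) % n])
        out.append(s)
    return out

tone = karplus_strong(220.0, duration=0.5)
```

The averaging filter is exactly the "guitar-adjacent interpolation scheme" being complained about: it does the damping, but it also stamps its own timbre on every note.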
Julius Smith wrote a pretty comprehensive textbook on building physical models of musical instruments, available online. Here, for example, is a chapter on modeling bowed string sounds: https://ccrma.stanford.edu/~jos/pasp/Bowed_Strings.html
From the article:
> As a demonstration, the researchers applied the computational violin to play two short excerpts: one from “Bach’s Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.
And in consumer products for 20+ years. Pianoteq [0], which is awesome, was first released in 2006.
Also Audio Modeling has been in the business of creating physically modeled virtual instruments, including the violin (under the SWAM series), for a while now as well. You can do pretty fun things like map a USB breath controller to bow pressure, etc.
It's much more difficult to use, though: you have to control lots of aspects of the simulation (using automation in a DAW, or MIDI controllers) to make it sound genuinely realistic.
OK I guess it seems like this is more of a tool for luthiers than for composers or music producers.
I currently use a raspberry pi with Pianoteq as sound output for my digital piano. It got a reluctant stamp of approval from my pianist son, although of course he prefers the physical response of even a poor acoustic piano.
The combination of pianoteq and a sample based piano is pretty nice too, though tough to do on a Pi.
Good speakers improve the experience because you get your room resonance etc.
The coolest thing - you can change temperament. So if you are playing music from before equal temperament, you can hear what different keys used to sound like! Very interesting especially with Bach.
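For anyone wondering how big those differences actually are, a quick back-of-the-envelope comparison (standard interval ratios, nothing Pianoteq-specific) shows the equal-tempered major third is about 14 cents sharp of a pure 5:4 third, which is part of why thirds beat differently from key to key in the older unequal temperaments:

```python
import math

def cents(ratio):
    # interval size in cents: 1200 * log2(frequency ratio)
    return 1200 * math.log2(ratio)

equal_third = 2 ** (4 / 12)   # four semitones in 12-tone equal temperament
just_third = 5 / 4            # pure 5:4 major third
pyth_third = 81 / 64          # Pythagorean third (four stacked 3:2 fifths)

# The 12-TET third is ~13.7 cents sharp of pure; the Pythagorean
# third is sharper still, ~21.5 cents.
print(round(cents(equal_third) - cents(just_third), 1))   # 13.7
print(round(cents(pyth_third) - cents(just_third), 1))    # 21.5
```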
I agree with your son, there is nothing like a real piano. There are interesting attempts at combining the digital and mechanical with soundboard transducers from Kawai and Yamaha, I haven't used them but I would like to.
90s physical modelling was a very simplified modular kind of modelling. Instead of analogue oscillators and filters you had "string" models, "pipe" models, various resonators, and so on.
The models were interesting, but still quite crude and basic.
This project is the most physical kind of physical modelling. It's an unsimplified brute-force model of the entire instrument body and string system, in full.
It doesn't try to "model a resonator", it models blocks of wood with various holes, and calculates how they distort and radiate as sound passes through them.
It's ridiculously expensive computationally, but it's also the only way to get all of the nuances of the sound.
I expect they're already working on a stick-slip model for bowing.
Theoretically you could use the same technique to model a piano or guitar, and you would get something indistinguishable from a real instrument.
You'd likely need a supercomputer to run the model in anything approaching real time.
But the advantage is that once you've got it you can do insane things like replace the strings with wood instead of metal, or use different metals, or "build" nonphysical pianos that are fifty feet long and have linear overtones all the way down to the bass.
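On the stick-slip point: even a toy one-mass model shows why bowing is its own problem. This is a rough sketch with made-up parameters (a velocity-weakening friction curve, not any published bow model), just to illustrate how the stick-slip limit cycle arises:

```python
import math

# Toy stick-slip oscillator: a single mass on a spring, dragged by a
# "bow" moving at constant speed. All parameters are illustrative.
k = 100.0     # spring stiffness
m = 1.0       # mass
v_bow = 0.2   # bow speed
mu_s = 1.0    # maximum static friction force
dt = 1e-3

def kinetic_friction(v_rel):
    # friction weakens as slip speed grows: the ingredient that sustains
    # a stick-slip limit cycle instead of settling into steady sliding
    return math.copysign(0.8 / (1.0 + 5.0 * abs(v_rel)), v_rel)

x, v, stuck = 0.0, v_bow, True
xs = []
for _ in range(40000):
    if stuck:
        v = v_bow
        x += v * dt                      # stick: mass rides with the bow
        if abs(k * x) > mu_s:            # spring force breaks static friction
            stuck = False
    else:
        f = -k * x + kinetic_friction(v_bow - v)
        v += f / m * dt                  # semi-implicit (symplectic) Euler
        x += v * dt
        # recapture: speeds match AND friction can hold the spring force
        if abs(v - v_bow) < 1e-3 and abs(k * x) < mu_s:
            stuck = True
    xs.append(x)
```

The displacement traces out the sawtooth-like motion characteristic of a bowed string (the Helmholtz motion, in the real 1D case); the hard part in a full model is coupling this nonlinear friction to the wave propagation on the string, rather than to a single mass.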
I can tell the difference between Pianoteq and a real piano, but I can't in general tell the difference between Pianoteq and a recording of a piano. Maybe there's some insane level of hi-fi gear which would let me, idk? But in general, when it's good enough for Steinway, Petrof and my conservatory student son to give their stamp of approval, I think it's good enough for me as well :) Quite a few of those insane things you mention can already be done with Pianoteq's physical model (e.g. emulating a 20m grand), and I suspect they keep a few knobs to themselves to sell virtual instruments.
That's a great way to put it. There's no way to fully reproduce that live sound, but compared to anything played through speakers, Pianoteq is indistinguishable from a real piano.
Out of the box it sounds a little too perfect, but just setting the Condition to the midway point (1.0) fixes that.
“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”
Ouch: this is completely inaccurate. Physical modeling has its roots in the 80s, and Stefan Bilbao has been doing FDM-based methods for over 20 years. I think he discusses FEM in Numerical Sound Synthesis.
Also, aschkually, a violin is on the "easier" end of making it sound realistic. It's one of the "tutorial" models you go through when you start learning about this (resonators + reverb get you 80% there). Much harder to do any plucked sound (guitar, piano), and much, much harder to model percussion accurately (cymbals, drums) in such a way that the sound doesn't come out dry and very evidently synthetic.
Source: I was very invested into this in the 2000s, although as a hobby, not professionally.
Conical and cylindrical bores definitely differ but I don't see why they'd be different specifically with respect to the lip interaction, can you say more about that part?
My father is a luthier, and while he definitely needs to wait until the instrument is finished to hear the full sound, he also uses multiple techniques on parts of an unfinished violin to hear *some* sound. For example, he knocks on the top or back plate and listens to the sound it makes.
I don’t know how much of it is just voodoo, but he’s been doing it for 50 years, so I’m sure he noticed some correlation to the final sound by now. :) I'll have to ask him.
https://github.com/Qzping/ELGAR
It's just fun to see solutions to problems you didn't even know to exist.
Looking it up just now, it turns out that, "Modern physics research shows that the f-shape allows the instrument to push much more air than a traditional round hole, resulting in greater acoustic power and projection."
Just wanted to share in case someone else had that same bit of false knowledge in their head.
"Show HN: Anyma V, a hybrid physical modelling virtual instrument" 01-aug-2024 https://news.ycombinator.com/item?id=41132104 29 comments
"Show HN: I built a synthesizer based on 3D physics" 02-may-2025 https://news.ycombinator.com/item?id=43873074 123 comments
My main instrument was the saxophone and whenever I hear AI/artificial saxophone somewhere I can notice it right away, but I'm very curious if I've ever been a victim of the toupee fallacy.
I wonder whether there's a good test/game where you have to guess whether a given sound of a musical instrument is real or not.
> Violin bowing, the researchers say, is a much more complicated interaction to model.
That's not how building a violin works. It is not a physics problem: it is a matter of materials, a matter of taste, a matter of adapting to the material you have, ...
I have no doubt there's been analytical/semi-analytical models around for decades. I mean a program that can take an arbitrary geometry or class thereof with specific materials and simulate the high frequency vibrations and model interactions with the body with high fidelity (not through ad-hoc models) is probably still out of scope of real time simulation.
My point is really that there are often families of models that deal with one thing, from semi-analytical models first coded in Fortran in the 80s that run in milliseconds but are only valid in certain configurations with a low degree of accuracy, to "first principles" simulations that may well require a supercomputer to produce results to a useful degree of accuracy (and not in real time). So, just because you see someone claim they can "simulate X", and then another makes the same claim 40 years later, that doesn't mean they're doing the same thing.
For instance, aeronautics has XFOIL. It's a semi-analytical model first devised in the 80s that computes aerodynamic coefficients for a certain class of airfoils (NACA). My understanding is it's a very clever, and industrially significant, piece of code, but ultimately it works in a narrow regime with some heavy simplifications. You can now get results from it in real time on a webpage. A proper CFD calculation for a NACA wing will take on the order of minutes to hours on a workstation (depending on requested precision and settings, e.g. air speed), and while closer to first principles, it's still using physical simplifications (RANS). So yeah, although nominally people have been "simulating airfoils" for 40 years, the techniques have refined considerably, and will continue to do so (practical LES and, someday, DNS). People might still be "simulating airfoils" a century from now, in ever more accurate (nailing down within the constraints), high-fidelity (lifting constraints) and generic ways.
Back to instruments, this is a difficult coupled problem, at fairly high frequencies (high frequencies = more expensive), with possible fluid-structure interactions, not to mention the geometries are fairly complex (to even get a workable mesh to begin with). My uneducated guess is we're still at either the semi-analytical or the "considerably simplified first principles" stage for this type of problem. Just like DNS, I'm sure you could "just resolve the scales and run it through a simulation with a really tiny time step", and this is liable to be similarly expensive as DNS (million-dollar single simulation). Additionally, they have to deal with the human ear, which is perhaps more unforgiving than an error plot on drag or lift. So I wouldn't dismiss news of instrument simulation as stale just because someone made something that produced similar artifacts in the past, as the methods will continue to evolve considerably.
Even in this case, they're choosing the easy path (plucked, pizzicato), but the human/instrument interface is still audibly oversimplified while the resonant body has an unnecessary amount of "realism". The sound of pizzicato has a distinct character because the player's finger/skin slides a bit on the string as they're plucking, among other factors, which sounds like it's missing here. This can be tricky to implement because it's not necessarily a one-way impulse. The string is already vibrating and affects the finger, hence "interface".
This applies 10x more with bowed strings.
If your model doesn't sound like someone's strangling a cat then it's probably not realistic.
The real sound comes when it's played with other instruments in concert. It doesn't need years of practice; it needs patience, the right setting, and an extra joint on your pinky :p
It absolutely requires years of practice to play the violin at an expert level. This is well documented.
> the real sound comes when its played with other instruments in concert
So in the violin repertoire, the Bach sonatas and partitas for unaccompanied violin, the Ysaÿe sonatas and the Paganini caprices are what, not real?
So I don't know if your criticism makes much sense.
You mention a few details; there are so many more if you think about it. The human-instrument interaction has all sorts of imperfections.
Tension in your shoulders can make you bend the neck a bit. Too much tension in your fingers might pull a note out of tune. Pushing not 100% straight along the bow might shift it sideways a bit, changing how it crosses the strings. Then of course there's where the bow sits on the strings (closer to or further from the bridge).
Humans are not perfect machines, but in those imperfections lies the beauty. Even a perfectly played instrument is played by a human, and has this 'humanization' across all the areas where human, instrument, and the music itself interact, imho.
If you produce music digitally this shows instantly, because all your instruments will sound flat and boring if you don't humanize.