
Physical modelling synthesis

A wholly different approach to synthesis is afforded by the application of physical models [3, 4] of musical components, such as strings, bars, plates, membranes, acoustic tubes, enclosed air cavities and rooms, together with interactions with excitation mechanisms such as reeds or bows.

Physical modelling synthesis, after a (long!) period of incubation, emerged in the 1980s, and has since dominated the sound synthesis research environment. As mentioned previously, it directly addresses issues of control and sound quality. Physical modelling sound output has a natural character and, at its best, exhibits all the subtlety of acoustically produced sound, with immense potential to go beyond what is possible with existing instruments: the musician is limited only by imagination and, of course, computational resources (read on!). The control aspect is also very neatly dealt with: instruments are defined by geometrical and material parameters, few in number, and are played by sending in physically meaningful signals such as striking locations and forces, or blowing pressures. This is not to say that user control of a physical model is easy, but it can be learned, much in the same way that one learns to play an acoustic instrument. In contrast, learning how to set the amplitudes, frequencies and phases of a thousand oscillators in order to produce a desired sound is probably beyond the capability of even the most astute and dedicated musician!

Mass-spring networks

The very earliest instances of physical modelling synthesis date back to the 1960s. In 1962, Kelly and Lochbaum [5] developed a model of the vocal tract, based on concatenated acoustic tubes, in order to perform vocal synthesis. Ruiz, and later Hiller and Ruiz [6], employed a finite difference model of a vibrating string to generate plucked and struck string tones as far back as 1969. In the late 1970s and early 1980s, the first complete environment, CORDIS, based on networks of masses and springs, was developed, primarily by Cadoz and his associates [7], and continues to be developed. All of these are essentially direct numerical solvers for differential equations. The video here shows a vibrating collection of masses and springs, where sound output is drawn from the motion of one of the constituent masses.
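Such a direct numerical solver is simple to sketch. The following is a minimal, illustrative simulation of a chain of point masses coupled by springs, advanced with an explicit (leapfrog/Verlet) finite difference update, with sound read from the motion of one mass. All parameter values and names here are invented for illustration; this is not drawn from CORDIS.

```python
import numpy as np

def mass_spring_chain(n_masses=20, mass=1e-3, stiffness=2e3,
                      sr=44100, duration=0.5, read_pos=5):
    """Chain of masses and springs, anchored at both ends, advanced
    with an explicit finite difference (leapfrog/Verlet) scheme.
    Returns the displacement of one mass as the output signal."""
    k = 1.0 / sr                      # time step (seconds)
    n_steps = int(duration * sr)
    u = np.zeros(n_masses)            # displacements at step n
    u_prev = np.zeros(n_masses)       # displacements at step n-1
    # initial condition: displace one mass, zero initial velocity
    u[n_masses // 2] = 1e-3
    u_prev[n_masses // 2] = 1e-3
    out = np.zeros(n_steps)
    for n in range(n_steps):
        # spring forces from neighbours, plus anchoring springs at the ends
        f = np.zeros(n_masses)
        f[1:] += stiffness * (u[:-1] - u[1:])
        f[:-1] += stiffness * (u[1:] - u[:-1])
        f[0] += -stiffness * u[0]
        f[-1] += -stiffness * u[-1]
        # Verlet update: u_next = 2 u - u_prev + (k^2 / m) f
        u_next = 2.0 * u - u_prev + (k * k / mass) * f
        u_prev, u = u, u_next
        out[n] = u[read_pos]          # "microphone" attached to one mass
    return out

sig = mass_spring_chain()
```

Note that explicit schemes of this kind are only conditionally stable: the time step must be small relative to the highest natural frequency of the network (comfortably satisfied for the values above at a 44.1 kHz sample rate).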

In the 1980s, various distinct frameworks emerged; among the most important have been digital waveguides and modal synthesis. With the advent of greater computing power came the possibility of performing synthesis for relatively complex systems in real time or near real time.

Digital waveguide synthesis

Digital waveguides [8], developed at CCRMA at Stanford University, make use of simple and efficient delay-line structures to model wave propagation in objects such as strings and acoustic tubes. Waveguides were subsequently patented and commercialized by the Yamaha Corporation, and constitute the most successful application to date of physical modelling synthesis methods. The video here shows the decomposition of the vibration of a string into travelling wave components, and the resulting delay-line structures.
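The delay-line idea can be sketched for an ideal, lossless string: two delay lines carry the right- and left-going travelling waves, with sign-inverting reflections at the fixed ends, and the physical displacement at any point is the sum of the two waves. This is only an illustrative toy (no losses, dispersion or fractional delay); the function name and parameters are invented for the example.

```python
def waveguide_string(delay_len=50, n_samples=400, pickup=10):
    """Ideal lossless string as two counter-propagating delay lines
    with inverting reflections at both (fixed) terminations."""
    # triangular "pluck" shape, shared equally between the two rails
    shape = [0.5 * (1.0 - abs(2.0 * i / (delay_len - 1) - 1.0))
             for i in range(delay_len)]
    right = shape[:]   # right-going travelling wave
    left = shape[:]    # left-going travelling wave
    out = []
    for _ in range(n_samples):
        # displacement at the pickup = sum of the two travelling waves
        out.append(right[pickup] + left[pickup])
        r_out = right[-1]   # sample arriving at the right end
        l_out = left[0]     # sample arriving at the left end
        # shift each line one step; a fixed end reflects with sign inversion
        right = [-l_out] + right[:-1]
        left = left[1:] + [-r_out]
    return out

sig = waveguide_string()
```

Because the model is lossless, the output is exactly periodic, with period equal to twice the delay-line length (a round trip); in practice, losses and dispersion are lumped into short filters at the terminations, which is what makes the method so efficient.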

Modal synthesis

Modal methods [9] rely on the decomposition of the dynamics of a system into modes, each of which oscillates at a given natural frequency. They have been researched extensively at IRCAM in Paris, giving rise to the Modalys software environment. The video here shows the decomposition of the vibration of a string into modes, which may then be added together in order to reconstruct the entire motion of the string.
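The mode-summation idea can be sketched as a bank of decaying sinusoidal oscillators, one per mode. In this minimal sketch, the modal frequencies are simply the harmonics of an ideal string, and the amplitude and decay choices are illustrative; they are not the output of any particular modal analysis (and certainly not of Modalys).

```python
import math

def modal_string(f0=220.0, n_modes=12, sr=44100, duration=0.5):
    """Sum of decaying sinusoids: one oscillator per mode, with
    1/m amplitude weighting and faster decay for higher modes."""
    n = int(sr * duration)
    out = [0.0] * n
    for m in range(1, n_modes + 1):
        freq = m * f0        # ideal-string modal frequencies (harmonics)
        amp = 1.0 / m        # illustrative amplitude weighting
        decay = 3.0 * m      # higher modes die away faster (1/s)
        for i in range(n):
            t = i / sr
            out[i] += amp * math.exp(-decay * t) * math.sin(2 * math.pi * freq * t)
    return out

sig = modal_string()
```

For stiff or irregularly shaped objects, the modal frequencies and shapes are not harmonic and must be computed offline, but the runtime structure (a sum of independent oscillators) stays exactly this simple.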

A great reference for physical modelling synthesis (and many other topics in digital audio!) is Julius Smith’s Global Index.