“Soundscape Auralization” Combines the Art of Recording with the Science of Sound

Matthew Azevedo – mazevedo@acentech.com
Acentech
33 Moulton St.
Cambridge, MA 02138

Popular version of paper 1aAA5, presented at the 167th Meeting of the Acoustical Society of America, held in 2014 in Providence, Rhode Island.


Soundscapes define their environments as strongly as landscapes. When we imagine New York City, the honking of taxis and shouts of street vendors are as much a part of the place as the skyscrapers. Who can think of the ocean without the cries of seagulls and the crashing of waves, or a summer night in the country without wind in the leaves and the chirping of insects? People have been recording soundscapes for decades, but recent advances in auralization (the creation of simulated acoustical spaces) have made computer-simulated yet acoustically accurate soundscapes possible.

While modern soundscape auralizations are accurate in terms of how sound travels and reflects in their simulated environment, they frequently lack the small details that convince listeners that what they are hearing is “real.” Far from the acoustic research labs where the technology of auralization has been developed, people have been creating realistic soundscapes for years in recording studios and Hollywood soundstages. Recording engineers are experts at carefully crafting the details that give sounds a sense of reality. Even when the sounds we hear are impossible, like the whoosh of a starship as it travels through the soundless vacuum of space, our minds frequently believe in them.

The next frontier in soundscape auralization is combining the parametric accuracy of computational acoustical models with the veracity of a well-engineered recording. This requires an approach that makes listeners’ expectations the starting point of an auralization. While many auralizations focus on the “important” sounds, like musicians on a stage or a speaker at a podium, we now see that a more holistic approach is needed. The “unimportant” parts of a soundscape (voices in the distance, wind, traffic, dogs barking) turn out to be as critical to a sense of reality as the foreground sounds. People spend every moment surrounded by incidental sounds, and when those sounds are missing, so too is the sense of reality.

Listen to the auralization of John Donne’s Paul’s Cross sermon at the Virtual Paul’s Cross Project website, vpcp.chass.ncsu.edu.

The quality and character of the source material is also a key component of an immersive soundscape. In some cases, researchers use poorly engineered recordings and rely on the simulated environment to breathe life into them. But the quality of the initial recording is as important to a soundscape as it is to a hit record. The lack of suitable source recordings also leads many auralizations to fall back on almost-right material. Successful soundscapes almost always require custom-recorded material to give an auralization a life of its own.

Figure 1: The Paul’s Cross Project, led by Dr. John Wall at NC State University, created an immersive auralization of the soundscape of John Donne preaching outside of St. Paul’s Cathedral in 17th-century London. The auralization, which was created by engineers at Acentech, features not only an actor performing the sermon, but also the sounds of a crowd of thousands, dogs, birds, horses, wind, and the church bells. You can learn more about the Paul’s Cross Project at vpcp.chass.ncsu.edu.

Computers have brought an amazing level of detail and accuracy to the simulated soundscapes created by acousticians. However, these simulations still require the artistic touch of a recording engineer to be truly believable. By combining the skills of acousticians and recording engineers, it is possible to produce immersive soundscapes that unite accuracy and veracity in a single, deeply engaging experience.