Dynamic Room Modeling
There’s a revolution happening in how digital signal processing is used in audio production. The reason is the dramatic increase in processing power and the sophistication of the applications running on desktop and portable devices. And there is no reason to believe that ever more powerful…and cost-effective…processors and innovative software won’t continue to offer recording professionals new tools.
I feel obligated to point out that all of these new plug-ins, powerful DSP tools and innovative applications operate on PCM-encoded audio. None of these processes are available in a 1-bit or DSD environment. Even if the ultimate delivery format is an SACD or a DSD download, chances are the majority of the production will have been done in the analog or PCM digital domain. Advocates who believe that DSD will somehow capture a significant share of audio production, regardless of market segment, are fooling themselves. Even now, as I’ve pointed out in the past, only 16% of the DSD audio you can purchase is natively recorded in DSD. The past, present and future of music production is going to be PCM-based…period.
Modeling an acoustic space using digital signal processing has been around for well over 20 years. Once we switched from analog to digital reverberation and delay lines back in the 1980s, the doors were thrown open. The first high-quality room modeling applications captured the acoustic parameters of a specific (and usually wonderful) concert hall. Engineers would bring expensive calibration microphones and test equipment into the targeted space and project a series of test signals to determine the reflection characteristics of that space. This included initial reflection times, frequency-dependent filtering, decay times and other components of timbre over time.
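The capture process described above is, at its core, deconvolution: play a known test signal into the room, record what comes back, and divide the two in the frequency domain to recover the room’s impulse response. Here is a minimal sketch of the idea in Python; the logarithmic sine sweep is one common choice of test signal, and the two-tap “room” filter is invented purely for illustration (a real measurement also has to contend with noise, nonlinearity and multiple microphone positions).

```python
import numpy as np

FS = 48_000                      # sample rate in Hz
DUR = 1.0                        # sweep length in seconds
t = np.arange(int(FS * DUR)) / FS

# Known excitation: a logarithmic sine sweep from 20 Hz to 20 kHz
f0, f1 = 20.0, 20_000.0
k = np.log(f1 / f0)
sweep = np.sin(2 * np.pi * f0 * DUR / k * (np.exp(t / DUR * k) - 1.0))

# Stand-in "room": a direct path plus two discrete reflections.
# A real hall's impulse response is what the measurement recovers.
true_ir = np.zeros(256)
true_ir[0], true_ir[100], true_ir[220] = 1.0, 0.5, 0.25

# What the calibration microphone would capture: the sweep convolved
# with the room's impulse response
recorded = np.convolve(sweep, true_ir)

# Deconvolve: divide the spectra (zero-padded past the convolution
# length so circular FFT convolution matches linear convolution),
# then transform back to the time domain
nfft = 1 << (len(recorded) - 1).bit_length()
eps = 1e-6                       # regularizer for near-zero spectral bins
H = np.fft.rfft(recorded, nfft) / (np.fft.rfft(sweep, nfft) + eps)
ir_est = np.fft.irfft(H, nfft)[: len(true_ir)]
```

In this toy case `ir_est` recovers the reflections we planted, and that impulse response is exactly the data a convolution preset stores.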
The results of this exhaustive testing were encapsulated in a “convolution” algorithm that would be programmed as a reverberation preset in an outboard digital effects processor. Any signal an engineer routed through the convolution program would be output with the “sound” of the original acoustic space. The boxes were expensive ($10,000 and up), but it was a whole lot better than traveling to the actual halls.
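The playback side of this scheme is plain convolution, which hardware and plugin convolvers perform in the frequency domain for speed. Below is a minimal sketch, substituting a synthetic exponentially decaying noise burst for a measured impulse response; the function names, RT60 value and 30% wet mix are illustrative assumptions, not any product’s actual design.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz

def make_synthetic_ir(duration_s=1.5, rt60_s=0.9, seed=0):
    """Toy impulse response: exponentially decaying white noise.
    A real room IR would be measured in the hall with test signals."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    decay = 10.0 ** (-3.0 * t / rt60_s)  # -60 dB at t = rt60_s (the RT60 definition)
    ir = rng.standard_normal(n) * decay
    return ir / np.max(np.abs(ir))

def apply_room(dry, ir, wet_mix=0.3):
    """Convolve the dry signal with the IR via FFT (zero-padded so the
    circular convolution is linear), then blend wet against dry."""
    n = len(dry) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()         # next power of two
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft), nfft)
    wet = wet[: len(dry)]                    # trim the reverb tail
    wet *= np.max(np.abs(dry)) / np.max(np.abs(wet))  # match peaks before mixing
    return (1.0 - wet_mix) * dry + wet_mix * wet

# One second of a decaying 440 Hz tone as the "dry" source
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
dry = np.sin(2 * np.pi * 440.0 * t) * np.exp(-4.0 * t)
processed = apply_room(dry, make_synthetic_ir())
```

Routing a signal through the preset amounts to calling `apply_room` with the stored impulse response; everything that makes the hall sound like that hall lives in the IR data.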
More recently, the notion of modeling an acoustic space has taken a new and potentially very exciting direction. By adding dynamic, real-time modification of the convolution algorithms, along with modeling of the equipment associated with a prized studio and space, engineers working in less-than-ideal environments can enjoy the benefits of a rich acoustic studio or performance space and its rare and valuable collection of microphones.
Figure 1 – The Ocean Way / Universal Audio Dynamic Room Modeling Plugin screen.
This is the opening blurb from the Universal Audio web page that describes the “world’s first dynamic room modeling plugin”:
“Imagine having access to one of the world’s premier recording studios, with full use of its vintage microphones, working alongside the man who has spent decades recording in its rooms, shaping your sounds in real time…with stunning results.
Developed by Universal Audio and Allen Sides, the Ocean Way Studios plug-in rewrites the book on what’s possible with acoustic space emulation. By combining elements of room, microphone, and source modeling, Ocean Way Studios moves far beyond standard impulse response players and reverbs — giving you an authentic replication of one of the world’s most famous recording studios.”
Now everyone who wants that Ocean Way Recording Studio sound can purchase a plugin for their Pro Tools rig and start making hit records. Well, at least they’ll have some new sound modification tools.
One final note regarding the plugins from Universal Audio. I wrote some time ago about the “Massive Passive” EQ plugin and the fact that it claimed to be capable of 96 kHz/24-bit operation but was, in fact, downconverting to 48 kHz. I was unable to determine from the specifications document whether this plugin runs at 96 kHz or 48 kHz. My guess would be the former.
Tomorrow, I’ll talk about the modeling of classic microphones and their use in a virtual space.
I was hoping you’d write something about this – thanks! I get that room modeling can make a recording sound like it was made in the modeled space, but is anything needed on the consumer end to duplicate the full effect? For example, will the home listener get the best reproduction of the intended sound with typical recommended room treatment and use of DRC like Acourate or Audiolense with a target B&K house curve, or does the home listener need a target curve (and perhaps other parameters) that emulates the modeled recording space? If the latter maximizes the ability to hear the intended sound, then perhaps music recorded this way needs to ship with metadata that allows the home user to set up something tailored to what the artist intended.
You hit the nail right on the head. My scheme is to introduce the concept that is being used in the production of new recordings…and then we’ll look at what people can do at home to reproduce the sounds as intended. Stay tuned.
While perhaps not directly related to room modeling, this is in line with high-fidelity production and reproduction: it would be great if you could address the myths and realities of recording/mixing/mastering high-fidelity music for playback on iPods, in cars and other less-than-ideal environments versus a good home system.
While I like all the advancements and techniques you advocate and seem to find in practice, I wonder if that’s what engineers really care about to be commercially successful in a world where much listening is in less-than-ideal situations. Do engineers need to make a fundamental choice to make music that sounds best in a good system versus a mobile environment, or do the same best practices make music sound the best in both? The thread that made me think of this most recently is http://www.hydrogenaudio.org/forums/index.php?showtopic=104328.
What seems to be going on could be more than just Loudness War collateral damage. It may be that engineers produce songs that will sound best on iPods while sacrificing sound quality on decent systems. But is such a tradeoff necessary? Can music be produced that will sound attractive in both environments?
Any insights you can share, perhaps in a future article, would be appreciated.
This is a great idea for a post. In short, yes…there are often several different mixes done for each of the market segments…including headphones and MP3 players.