
Production Paths: Microphone Multiplication Part II

Sorry, I skipped a day in getting back to the topic of microphones, recording choices, and the differences between a classical/jazz session and a session of similar instrumentation that will be used as a movie soundtrack. The tools may be similar but the techniques are not. It’s a matter of mixing flexibility and target audience.

A classical/jazz record is intended to bring the listener back into the concert hall and let them relive the experience in their own home, car, or headphones. The choices that are made by the production team are tailored to best meet that objective. On the other hand, a motion picture soundtrack plays a subservient role to the visuals on the screen and the story being told by the director. The notion of recreating a concert hall gives way to the production requirements of the movie business.

Then there’s the question of where the recordings will be heard. I mentioned the home audio setup, perhaps your car, or a personal listening experience for a stand-alone recording, but a movie soundtrack is reproduced in a theater. That means that the theater chain has installed standardized surround sound playback equipment according to the THX standard. It also means that the final dubs and layback of the mixed dialog, effects and music have been referenced against the Dolby or some other standard for levels and format.

So how might a scoring engineer handle a session? What would be different than the session technique that I described before with minimal miking techniques and limited numbers of tracks?

First, the studio itself is not a live performance venue…it’s a studio or what’s referred to as a scoring stage. Here in Southern California, there are quite a few really great scoring stages like the Sony Culver City studio, the Burbank stages owned and operated by Warner Brothers or the stages at FOX in Century City. But there are lots more second tier, and less expensive, studios around town that filmmakers use to record movie and television soundtracks.

I’ve spent time in a scoring studio assisting the composer or recording a score over the years. I even got to spend a full week in the tech building at Lucas Ranch up in Northern California. They have a really amazing facility that is available for movie scoring AND is often used by record labels for classical and jazz projects.

The engineers who specialize in movie scores use lots of microphones. There are dedicated microphones for each soloist or featured player, there are microphones for the various sections of the orchestra (like the first and second violins, violas, celli and basses), there are microphones placed close to the percussion section and of course, lots of mikes for the non-orchestral instruments such as guitars, saxophones and choirs.

Film composers also make use of a lot of samplers and synthesizers in their work. The massive racks of synths are usually submixed by the player and then sent “direct” via DI Box to the console. They rarely are routed through speakers and then back into microphones.

Then there are the mikes that are placed in more traditional locations. There are stereo pairs placed high and behind the ensemble to capture the general orchestral blend of the acoustic instruments. The layout or physical distribution of the ensemble is modeled on the traditional symphony layout familiar to patrons of concerts.

The sessions that I’ve attended have all of these microphones placed around the musicians but they are almost exclusively single mikes…not stereo pairs with the exception of the pair above and behind the conductor. The mikes are fed to a console, tweaked and then recorded on a big Pro Tools rig using 48 kHz/24-bit PCM. There are no scoring stages that I’m aware of that use anything but PCM audio.
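To give a sense of the data these 48 kHz/24-bit PCM sessions generate, here is a rough back-of-the-envelope sketch. The sample rate and bit depth are from the post; the track count and session length are hypothetical round numbers chosen only for illustration:

```python
# Rough data-rate math for a 48 kHz / 24-bit PCM scoring session.
SAMPLE_RATE = 48_000      # samples per second (from the post)
BIT_DEPTH = 24            # bits per sample (from the post)
TRACK_COUNT = 96          # hypothetical track count for a large scoring session
SESSION_MINUTES = 15      # hypothetical minutes of recorded music

# One mono track: 48,000 samples/sec * 3 bytes/sample = 144,000 bytes/sec.
bytes_per_sec_per_track = SAMPLE_RATE * BIT_DEPTH // 8

# Total raw audio for the whole multitrack session.
session_bytes = bytes_per_sec_per_track * TRACK_COUNT * SESSION_MINUTES * 60

print(f"{bytes_per_sec_per_track} bytes/sec per track")
print(f"{session_bytes / 1e9:.1f} GB for the session")  # about 12.4 GB
```

At these rates, doubling the sample rate doubles the storage and throughput for every one of those tracks, which is part of why a scoring stage sticks with a single proven format.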

These types of sessions are phenomenally expensive and there are very specific rules laid out by the union as to how many minutes of audio you can record per hour and how many breaks are allowed. The engineer cannot risk having any technical difficulties arise during the sessions. They don’t experiment with high sample rates or stereo miking techniques because there is a strict procedure in place. Can you imagine walking into a dubbing session with your music at one sample rate and the rest of the tracks at another? It’s not going to happen.

Dr. AIX

Mark Waldrep, aka Dr. AIX, has been producing and engineering music for over 40 years. He learned electronics as a teenager from his HAM radio father while learning to play the guitar. Mark received the first doctorate in music composition from UCLA in 1986 for a "binaural" electronic music composition. Other advanced degrees include an MS in computer science, an MFA/MA in music, a BM in music and a BA in art. As an engineer and producer, Mark has worked on projects for the Rolling Stones, 311, Tool, KISS, Blink 182, Blues Traveler, Britney Spears, the San Francisco Symphony, The Dover Quartet, Willie Nelson, Paul Williams, The Allman Brothers, Bad Company and many more. Dr. Waldrep has been an innovator when it comes to multimedia and music. He created the first enhanced CDs in the 90s, the first DVD-Videos released in the U.S., the first web-connected DVD, the first DVD-Audio title, the first music Blu-ray disc and the first 3D Music Album. Additionally, he launched the first High Definition Music Download site in 2007 called iTrax.com. A frequent speaker at audio events and author of numerous articles, Dr. Waldrep is currently writing a book on the production and reproduction of high-end music called "High-End Audio: A Practical Guide to Production and Playback". The book should be completed in the fall of 2013.
