Dr. AIX's POSTS — 23 January 2015


The Audiophiliac, aka Steve Guttenberg, interviewed mastering engineer Dave McNair over at CNET and basically asked, “What’s the biggest factor in determining the overall sound of a recording?” (you can read the article here). The answer depends on a lot of factors. As a former mastering engineer of 16 years (can I say recovering mastering engineer?) and currently a producer and engineer of new high-resolution audio recordings, I believe I have something to contribute to this topic. The original recording sessions are the most important stage of the recording process in establishing high fidelity…if that matters.

And it matters to me because I’ve been trying for 15 years to produce and release recordings that avoid the traditional recording pitfalls (as I see them…over-processing, heavy use of compression, etc.). The same considerations may not be important to other recording engineers. Their job is to produce a recording that matches the sound of the current commercial hits. The actual fidelity of the tracks is a secondary concern.

Music used to be produced by musicians playing acoustic and electric instruments. All a recording engineer had to do was put up some microphones or plug in a direct box and route the signals through a bunch of microphone preamps to a recording machine. Decisions were made regarding mic placement and the application of compressors, equalizers, and other signal modifiers, but the sound that came from the performer was captured on analog tape or a digital recorder of some sort. The fidelity of the music being played and recorded was locked down during the original sessions. There is no going back later and restoring the fidelity of a poorly recorded track (with some exceptions, such as NoNoise processing on very old recordings). Whatever signal-to-noise ratio was present during those original takes persists through the rest of the production.
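As a quick back-of-the-envelope illustration of why that ratio is locked in, here is a minimal sketch (made-up numbers, not data from any actual session): any gain applied later in the chain raises the noise floor right along with the music, so the ratio established at the microphone never improves.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 48000  # assumed sample rate in Hz
t = np.arange(sr) / sr

# A 1 kHz "performance" plus the hiss captured by the mic/preamp chain
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)
noise = 0.001 * rng.standard_normal(sr)

def snr_db(sig, noi):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(sig**2) / np.mean(noi**2))

print(f"SNR at the session: {snr_db(signal, noise):.1f} dB")

# Gain applied downstream (mixing, mastering) scales both equally,
# so the ratio set during the original take carries straight through.
gain = 4.0  # roughly +12 dB
print(f"SNR after downstream gain: {snr_db(gain * signal, gain * noise):.1f} dB")
```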

This applies to the overdub sessions during which additional instrumental and vocal parts are added to the basic rhythm tracks. In fact, if the sessions are being done on analog tape, each pass of the tape over the record and playback heads reduces the fidelity of the signals. A very small amount of the oxide is scraped away each time.

Mixing engineers (who are often not the same people as the recording engineers) spend additional hours tweaking the quality of each sound in the overall blend. Mixing engineers are not in the business of maximizing fidelity for an individual track. Their job is to bring all of the parts into an artistic blend through the use of volume, dynamics processing, spatial distribution, and equalization. Reverbs, delays, and other specialized processors are widely used and actually tend to blur the clarity and dull the fidelity of a track. The fidelity of a track doesn’t get better during the mixing stage of production. The tracks are made punchier and less dynamic.
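For readers who haven’t sat behind a console, here is a bare-bones sketch of what dynamics processing does to a track’s levels (a hypothetical static compressor curve with made-up threshold and ratio, not any particular engineer’s chain): loud passages are pulled down toward the quiet ones, which is exactly that trade of dynamics for punch.

```python
import numpy as np

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compression curve: levels above the threshold are reduced
    by the ratio (4:1 means 4 dB over the threshold comes out as 1 dB)."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

# Hypothetical peak levels (dBFS) of a verse, a chorus, and a big drum hit
levels = np.array([-30.0, -12.0, -3.0])
print(compress_db(levels))  # -> [-30.  -18.  -15.75]
# The 27 dB spread between quietest and loudest shrinks to about 14 dB.
```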

So what’s left for the poor mastering engineers? It used to be that if you’d done your mixing right, the mastering engineer would have nothing to do but put the tracks in the right sequence, enter the ISRC codes, run down the album, and output it to DDP or Sony 1630 tape for the plant. Not anymore. Mastering engineers have new digital tools that guarantee that any remaining fidelity present in the mixes is smashed into an even amplitude plateau. That’s what the record labels want because that’s what the radio stations, YouTube, and Spotify want. It’s the SOP (standard operating procedure) and is not likely to change.
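And here is a crude sketch of what those loudness-maximizing tools do (synthetic audio and arbitrary gain settings, purely illustrative): crank everything up, flatten the peaks, and watch the peak-to-RMS “dynamic range” collapse into that amplitude plateau.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a finished mix: a loud section followed by a quiet one
loud = 0.3 * rng.standard_normal(48000)
quiet = 0.05 * rng.standard_normal(48000)
mix = np.concatenate([loud, quiet])

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB -- a rough stand-in for dynamic range."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x**2)))

# Crude "loudness maximizer": push the gain up, then clip the peaks flat
maximized = np.clip(8.0 * mix, -1.0, 1.0)

print(f"crest factor of the mix:       {crest_factor_db(mix):5.1f} dB")
print(f"crest factor after maximizing: {crest_factor_db(maximized):5.1f} dB")
```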

So why bother with release formats that brag about 24 bits or even higher? There’s no reason to go to 24 bits when the output from the mastering studio uses far less dynamic range than even 16 bits can deliver.
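Some rough textbook arithmetic makes the point (the program-range figures below are hypothetical, not measurements of any particular release): each bit of linear PCM buys about 6 dB of dynamic range, so a hyper-limited master exercises only a handful of the 16 bits it is delivered on, never mind 24.

```python
# Rule of thumb: an N-bit linear PCM word spans about 6.02*N + 1.76 dB
def pcm_dynamic_range_db(bits):
    return 6.02 * bits + 1.76

def bits_needed(program_range_db):
    """Roughly how many bits it takes to span a given dynamic range."""
    return program_range_db / 6.02

print(f"16-bit ceiling: {pcm_dynamic_range_db(16):.0f} dB")  # ~98 dB
print(f"24-bit ceiling: {pcm_dynamic_range_db(24):.0f} dB")  # ~146 dB

# Hypothetical programs: a dynamic acoustic master vs. a loudness-war pop master
for name, range_db in [("dynamic acoustic master", 60.0),
                       ("hyper-limited pop master", 10.0)]:
    print(f"{name}: ~{bits_needed(range_db):.0f} bits of real dynamic range")
```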

The opportunity to establish the fidelity of a recording happens at the start. It’s the job of the recording engineer to get it right when the music is being played.

Forward this post to a friend and help us spread the word about HD-Audio


About Author

Dr. AIX

Mark Waldrep, aka Dr. AIX, has been producing and engineering music for over 40 years. He learned electronics as a teenager from his HAM radio father while learning to play the guitar. Mark received the first doctorate in music composition from UCLA in 1986 for a "binaural" electronic music composition. Other advanced degrees include an MS in computer science, an MFA/MA in music, a BM in music, and a BA in art. As an engineer and producer, Mark has worked on projects for the Rolling Stones, 311, Tool, KISS, Blink 182, Blues Traveler, Britney Spears, the San Francisco Symphony, The Dover Quartet, Willie Nelson, Paul Williams, The Allman Brothers, Bad Company and many more. Dr. Waldrep has been an innovator when it comes to multimedia and music. He created the first enhanced CDs in the 90s, the first DVD-Videos released in the U.S., the first web-connected DVD, the first DVD-Audio title, the first music Blu-ray disc and the first 3D Music Album. Additionally, he launched the first High Definition Music Download site in 2007 called iTrax.com. A frequent speaker at audio events and author of numerous articles, Dr. Waldrep is currently writing a book on the production and reproduction of high-end music called "High-End Audio: A Practical Guide to Production and Playback". The book should be completed in the fall of 2013.

