The Audiophiliac, aka Steve Guttenberg, interviewed mastering engineer Dave McNair over at CNET and basically asked, “What’s the biggest factor in determining the overall sound of a recording?” (you can read the article here). The answer depends on a lot of factors. As a former (can I say recovering?) mastering engineer of 16 years and currently a producer and engineer of new high-resolution audio recordings, I believe I have something to contribute to this topic. The original recording sessions are the most important stage of the recording process in establishing high fidelity…if that matters.
And it matters to me because I’ve been trying for 15 years to produce and release recordings that avoid the traditional recording pitfalls (as I see them: overprocessing, heavy use of compression, etc.). The same considerations may not be important to other recording engineers, whose job is to produce a recording that matches the sound of the current commercial hits. The actual fidelity of the tracks is a secondary concern.
Music used to be produced by musicians playing acoustic and electric instruments. All a recording engineer had to do was put up some microphones or plug in a direct box and route them through a bunch of microphone preamps to a recording machine. Decisions were made regarding mic placement and the application of compressors, equalizers, and other signal modifiers, but the sound that came from the performer was captured on analog tape or a digital recorder of some sort. The fidelity of the music being played and recorded was locked down during the original sessions. There is no going back later (with some exceptions via tools like NoNoise for very old recordings) and restoring the fidelity of a poorly recorded track. Whatever signal-to-noise ratio is present during those original takes persists through the rest of the production.
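That last claim can be sanity-checked with a little arithmetic: gain applied anywhere later in the chain scales the captured noise right along with the music, so the ratio established at the session never improves. A minimal sketch (the level figures are made up for illustration):

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels."""
    return 20 * math.log10(signal_rms / noise_rms)

# Hypothetical levels captured at the tracking session:
signal_rms = 0.5    # the music
noise_rms = 0.005   # preamp hiss, room noise, tape hiss, etc.
print(round(snr_db(signal_rms, noise_rms), 1))  # 40.0 dB

# Any gain applied downstream (mixing, mastering) multiplies
# signal and noise alike, so the ratio is unchanged:
gain = 4.0
print(round(snr_db(signal_rms * gain, noise_rms * gain), 1))  # 40.0 dB
```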
This applies to the overdub sessions during which additional instrumental and vocal parts are added to the basic rhythm tracks. In fact, if the sessions are being done on analog tape, each pass of the tape over the record and playback heads reduces the fidelity of the signals. A very small amount of the oxide is scraped away each time.
Mixing engineers (who are often not the same people as the recording engineers) spend additional hours tweaking the quality of each sound in the overall blend. Mixing engineers are not in the business of maximizing fidelity for an individual track. Their job is to bring all of the parts into an artistic blend through the use of volume, dynamics processing, spatial distribution, and equalization. Reverb, delays, and other specialized processors are widely used and actually tend to blur the clarity and dull the fidelity of a track. The fidelity of a track doesn’t get better during the mixing stage of production. The tracks are made punchier and less dynamic.
So what’s left for the poor mastering engineers? It used to be that if you’d done your mixing right, the mastering engineer would have nothing to do but put the tracks in the right sequence, enter the ISRC codes, run down the album, and output it to DDP or Sony 1630 tape for the plant. Not anymore. Mastering engineers have new digital tools that guarantee that any remaining fidelity present in the mixes is smashed into an even amplitude plateau. That’s what the record labels want because that’s what the radio stations, YouTube, and Spotify want. It’s the SOP (standard operating procedure) and is not likely to change.
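That “even amplitude plateau” is the work of a peak limiter: clamp the transients, then turn the whole thing up. A toy sketch (a crude hard limiter on made-up sample values, not any particular mastering tool) shows the crest factor shrinking — the peak-to-RMS contrast that makes a track sound dynamic:

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_db(samples):
    """Crest factor: peak level over RMS level, in dB."""
    return 20 * math.log10(max(abs(s) for s in samples) / rms(samples))

def limit(samples, ceiling):
    """Crude hard limiter: clamp the peaks, then apply makeup
    gain so the loudest sample sits back at full scale."""
    clamped = [max(-ceiling, min(ceiling, s)) for s in samples]
    return [s / ceiling for s in clamped]

# Toy "mix": a few loud transients over a quieter bed.
mix = [0.1, 0.2, 0.9, 0.15, -0.8, 0.1, 0.95, -0.1]
loud = limit(mix, ceiling=0.3)

# The limited version is hotter overall, but the dynamic
# contrast between transients and the bed is gone.
print(round(crest_db(mix), 1), round(crest_db(loud), 1))
```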
So why bother with release formats that brag about 24 bits or even higher? There’s no reason to go to 24 bits because the output from the mastering studio uses much less than 16 bits.
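The back-of-envelope reasoning: each bit of PCM word length buys roughly 6 dB of dynamic range, so 16 bits spans about 96 dB and 24 bits about 144 dB. A master whose loud and soft passages differ by only ~10 dB (a hypothetical figure for a heavily limited release) exercises only a sliver of either range:

```python
import math

# Each PCM bit adds 20 * log10(2) ≈ 6.02 dB of dynamic range.
DB_PER_BIT = 20 * math.log10(2)

def bits_for_range(dynamic_range_db):
    """Bits needed to span a given dynamic range. Note this
    measures macrodynamic swing, not quantization resolution."""
    return dynamic_range_db / DB_PER_BIT

print(round(16 * DB_PER_BIT))  # ~96 dB available in 16 bits
print(round(24 * DB_PER_BIT))  # ~144 dB available in 24 bits

# A heavily limited master swinging only ~10 dB between its
# loudest and quietest passages (hypothetical figure):
print(round(bits_for_range(10), 1))  # under 2 bits of swing
```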
The opportunity to establish the fidelity of a recording happens at the start. It’s the job of the recording engineer to get it right when the music is being played.