The intensity of sound diminishes according to the inverse square law. That rule states that if you double the distance between a sound source and a receiver (such as a microphone or your ear), only one quarter of the original energy remains. Move four times as far away and the sound level drops to one sixteenth of what it was to start. The same is true of light energy.
Figure 1 – The Inverse Square Rule
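The arithmetic above is easy to verify. A minimal sketch (the function names are mine, just for illustration) that computes the remaining energy at a given distance ratio, and the same drop expressed in decibels:

```python
import math

def relative_intensity(distance_ratio):
    # Inverse square law: intensity falls as 1 / distance^2
    return 1.0 / distance_ratio ** 2

def level_change_db(distance_ratio):
    # The same drop expressed in decibels: 10 * log10(I2 / I1)
    return 10.0 * math.log10(relative_intensity(distance_ratio))

print(relative_intensity(2))           # 0.25   -> one quarter of the energy
print(relative_intensity(4))           # 0.0625 -> one sixteenth
print(round(level_change_db(2), 1))    # -6.0   -> about 6 dB per doubling
```

Every doubling of distance costs roughly 6 dB of intensity, which is why the falloff compounds so quickly.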
There are all sorts of musical details generated by performers and their instruments while they play. It’s not just room reflections, echoes and reverberations that need to be captured during a recording. How about the grittiness that happens when a string player first draws their bow across a string before the string actually starts to produce a tone? Or the finger noise that happens when a guitarist moves quickly up or down the fretboard of their instrument? These are part of the aggregate sound.
If you’re a musician, you already know the intimacy of playing an instrument AND the many subtle and not-so-subtle noises that emanate from it. If you’re not a musician, perhaps you’ve had the rare pleasure of sitting very close to an acoustic performer as they strum and sing. I highly recommend that you visit a music school and search out a bassoonist. In close proximity, there is more air noise coming from the keys and holes on that instrument than there are musical notes. But when the bassoonist does their thing in the middle of an orchestra and you’re sitting 40-100 feet away, all of those “extra” sonic details meld into the overall sound.
So if capturing and reproducing ALL of the sonic details of a performance is important, then the standard procedure of placing a few microphones many feet from the source needs to be rethought. When I started AIX Records, I considered that very thing. As a result, all of my recordings…including the ones I’ve done of a full 120-piece symphony orchestra…were made with stereo pairs of microphones placed close to the sound sources. Conventional recordings of symphonic music are NOT done this way, and as a result they strike me as distant and unfocused. But Hollywood film scores are done using multiple microphone techniques (they don’t use stereo pairs as I do…oh well), and millions of people have been exposed to “close up” orchestral sound.
For smaller ensembles, I place an ORTF pair (a stereo microphone technique that places two identical mikes about 7 inches — 17 cm — apart at a 110-degree angle) just inches from the front of an acoustic guitar or the speaker cabinet of an amplified guitar. If you take a look at the video of the Mozart Clarinet Quintet recording that I did a few years ago, you’ll notice a stereo pair of microphones above each of the upper strings and a stereo pair close to the cello and clarinet. You can bet that those mikes are getting a lot more “detail” than a simple pair or a 5.1 array of microphones 30-50 feet away (remember the inverse square rule).
Figure 2 – The Old City String Quartet – Horn Quintet – notice the close stereo miking.
It’s easy to dial up the ambient microphones that I place some distance from the stage and thereby minimize any obnoxious “details,” but it is impossible to put details back in if you don’t have microphones up close when you’re making a recording. No amount of compression or equalization is going to fabricate sounds that weren’t captured in the first place.
I love the sound of being close to a guitarist as they pick or strum, or to a piano as a skilled performer pulls all of the nuances of the instrument out for the audience. I put microphones right up close to the strings of a piano as well…and pianists and reviewers have said that the sound is “the best they’ve ever heard.”
Would you hold a conversation with someone at a distance of 30 feet or more? Why do we insist on listening to recorded music that way?