The engineers at KV2 are concerned with producing high SPLs without distortion in a hall or large concert venue. They don’t have to record and then play back music because they design and deliver live sound. But they do want to process the sound in the digital domain in order to minimize distortion of various kinds and maintain very high timing accuracy.
The traditional analog signal path from microphones and direct boxes into an analog console and then on to an array of amplifiers and speakers can deliver amplified music and vocals very accurately. In fact, most live concert PA systems have purely analog signal paths but are under the control of digital recall and automation systems. So why would KV2 be so proud of their “new format”, which depends on converting the analog output signals into DSD at 20 MHz, roughly 7 times the standard DSD rate of 2.8224 MHz?
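The rate comparison is easy to verify with a little arithmetic. Standard DSD (often called DSD64) runs at 64 times the CD sampling rate of 44.1 kHz, and KV2's quoted 20 MHz is about seven times that. A minimal sketch:

```python
# Quick arithmetic check of the sample rates discussed above.
CD_RATE = 44_100             # Hz, Red Book CD sampling rate
DSD64_RATE = 64 * CD_RATE    # Hz, standard DSD rate = 2.8224 MHz
KV2_RATE = 20_000_000        # Hz, the rate KV2 quotes for their format

print(f"DSD64 rate: {DSD64_RATE / 1e6:.4f} MHz")                  # 2.8224 MHz
print(f"KV2 rate vs. DSD64: {KV2_RATE / DSD64_RATE:.2f}x faster")  # ~7.09x
```

The ratio comes out to about 7.09, which matches the "7 times faster" claim.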
The ideal system keeps the analog signals…the microphones and other instruments plugged directly into the console…in the analog domain. The correct place for digital technology is in the control of various analog parameters such as switch positions, fader levels, panning, mutes, EQ, etc. A modern concert is a preprogrammed set of digital cues that configure the console for each tune in turn. The balancing of levels and “timbres” happens during and before the sound check.
The KV2 website endorses a threefold description of audio. The relevant paragraph from their piece reads:
Sound is a three dimensional object consisting of three primary parameters, these are:
1 – The level
2 – The frequency
3 – Time
The common hearing range of the human ear is from 0 to 120 dB of the signal level, the frequency range is from 20Hz to 20kHz, but it is often neglected to recognize the importance of resolution in time. Human hearing is able to recognize time definition, (the difference in incoming sound), up to 10 μs, however the latest research has found, that it is even less (5 μs).
They include the following illustration:
Figure 1 – A graphic from KV2 audio showing their “definition of sound”.
It’s interesting to see how they define sound. The opening sentence, “The common hearing range of the human ear is from 0 to 120 dB of the signal level”, besides being a little stilted, is inaccurate. Human hearing can perceive a very wide range of amplitudes. Traditionally, the maximum amplitude that can be “heard” without causing pain and hearing damage is 130 dB SPL…meaning it’s referenced to actual acoustic energy in the air. We’ve talked about this as dynamic range because we want to maximize the difference between the quietest and the loudest sounds in a selection of recorded music. These days DACs can deliver up to 130 dB of range, but very few recordings deliver that much…virtually none. And the range at a live concert is much, much less. Even at home or in a studio, numbers above 80-90 dB are rare.
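To get a feel for how large a 130 dB range really is, it helps to convert decibels back into linear ratios using the standard formulas (20·log10 for amplitude, 10·log10 for power). A minimal sketch:

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a linear amplitude (pressure) ratio."""
    return 10 ** (db / 20)

def db_to_power_ratio(db):
    """Convert a level difference in dB to a linear power ratio."""
    return 10 ** (db / 10)

# The 130 dB figure spans an enormous linear range:
print(f"{db_to_amplitude_ratio(130):,.0f} : 1 in amplitude")  # ~3.16 million : 1
print(f"{db_to_power_ratio(130):,.0f} : 1 in power")          # ~10^13 : 1
```

That loudest-to-quietest ratio of millions to one is why only the best DACs, and essentially no recordings, exercise the full range.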
They cite the 20 Hz-20 kHz frequency range, which has been the traditional range used in most references. Only recently have there been indications that ultrasonics may play a role in human hearing.
Then there’s the “often neglected…importance of resolution in time”, according to KV2. “Human hearing is able to recognize time definition (the difference in incoming sound)”, according to their site.
Time in acoustics is about phase relationships. How much acuity do we have when hearing two sounds arriving at different times? It turns out that we’re very good at this…down to 5-10 microseconds. But just how important is the delta in arrival times compared to the other aspects of sound?
We hear sounds arriving at different times every day. The reflections of source sounds off the floor, the walls, and objects in a space create a myriad of different arrival times. It’s part of ambience, reverberation, etc. Obviously, we want two speakers to deliver sounds with accurate phase, but the effects of phase on the actual fidelity of a piece of music are not as great as those of the frequency or dynamic parameters.
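Arrival-time differences map directly onto path-length differences through the speed of sound (about 343 m/s in air at room temperature). A minimal sketch of that conversion, showing both an everyday reflection delay and the physical scale of the 5 μs acuity figure:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def path_difference_to_delay(meters):
    """Extra arrival time (seconds) for a path that is `meters` longer."""
    return meters / SPEED_OF_SOUND

def delay_to_path_difference(seconds):
    """Path-length difference (meters) corresponding to a given delay."""
    return seconds * SPEED_OF_SOUND

# A wall reflection whose path is 1 m longer than the direct sound
# arrives about 2.9 ms later:
print(f"{path_difference_to_delay(1.0) * 1e3:.1f} ms")
# The 5 microsecond acuity figure corresponds to only ~1.7 mm of path:
print(f"{delay_to_path_difference(5e-6) * 1e3:.2f} mm")
```

So ordinary room reflections produce deltas hundreds of times larger than the finest timing differences we can detect, which is part of why phase accuracy, while desirable, is not the dominant factor in perceived fidelity.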
Timing and phase are not equal partners to frequency and amplitude.