I’ve talked about digital sampling and the advantages of 24 bits over 16 bits here in the recent past. And there have been a few pointed comments regarding some of my assertions. I try to keep myself open to other points of view, but when they run counter to my own knowledge and experience, I seek help from other knowledgeable individuals. As most of you know, one of the smartest and most experienced electrical engineers/digital system designers I know is John Siau from Benchmark Media. I wrote to him recently about some of the issues relating to word length, etc., and this was his response. He granted my request to post it. It’s an interesting read.
Wow, your reader is very confused! I don’t even know where to begin.
Has he read my application note [read it here] where I give the example of 1-bit DSD? If he hasn’t, it should give him some food for thought. By his accounting, DSD, with only two levels available, should be highly distorted. The fact that a 1-bit digital system can produce a high-quality recording debunks much of what he is claiming.
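To make the 1-bit point concrete, here is a minimal sketch of a first-order delta-sigma modulator. Everything about it is illustrative: real DSD uses higher-order modulators, and the 64x rate, tone frequency, and moving-average decoder are assumptions chosen for simplicity, not Benchmark's design.

```python
import numpy as np

# Minimal first-order delta-sigma modulator sketch (illustrative only;
# real DSD uses higher-order modulators at 64x 44.1 kHz).
fs = 64 * 44100          # oversampled rate (assumed 64x, DSD-style)
n = 1 << 16
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz tone at half scale

s = 0.0                  # integrator state
prev = 0.0               # previous 1-bit output
y = np.empty(n)          # the 1-bit stream: every sample is +1 or -1
for i in range(n):
    s += x[i] - prev     # accumulate the error against the last output
    prev = 1.0 if s >= 0 else -1.0
    y[i] = prev

# A crude low-pass filter (moving average) recovers the audio from the
# two-level stream; the shaped noise sits far above the audio band.
k = 256
lp = np.convolve(y, np.ones(k) / k, mode="same")
corr = np.corrcoef(x, lp)[0, 1]   # close to 1: the tone survives
```

Despite the output having only two levels, the low-pass-filtered bitstream tracks the input sine closely, which is exactly the point being made above.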
Many people get drawn into his train of thought because they envision DAC outputs as stepping from the quantization level at one sample to the quantization level at the next sample. This is known as a zero-order hold. This would generate a series of triangular-shaped error signals, one triangle per sample. These high-frequency errors can be removed with a brick wall analog lowpass filter with a cutoff frequency that is at or below the Nyquist frequency. But no modern DAC actually works this way! All modern DACs are oversampled, and the spectrum of the error signal is well separated from the audio band. With this oversampling, virtually all of the error signal (due to sampling) is removed, and the input sample rate has no bearing on the amplitude of the error signal.
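The effect of moving the hold to a higher rate can be sketched numerically. In this hypothetical comparison a densely sampled sine stands in for the "analog" waveform, and the 48 kHz rate, 64x ratio, and 256-point analog grid are all assumptions for illustration:

```python
import numpy as np

# Sketch: zero-order-hold error at the original rate vs. at a 64x
# oversampled rate. A densely sampled sine stands in for the "analog"
# waveform; all rates and ratios here are illustrative assumptions.
fs = 48000
R_analog = 256                   # "analog" grid: 256 points per input sample
fa = fs * R_analog
n = fs // 100                    # 10 ms worth of input samples
t = np.arange(n * R_analog) / fa
analog = np.sin(2 * np.pi * 1000.0 * t)

# ZOH directly at fs: hold each input sample for 256 analog ticks.
zoh_1x = np.repeat(analog[::R_analog], R_analog)

# ZOH at 64*fs: for a pure tone below Nyquist, ideal interpolation lands
# exactly on the continuous sine, so we can sample it directly.
zoh_64x = np.repeat(analog[::R_analog // 64], R_analog // 64)

err_1x = np.sqrt(np.mean((zoh_1x - analog) ** 2))
err_64x = np.sqrt(np.mean((zoh_64x - analog) ** 2))
```

The RMS step error shrinks in proportion to the hold period, and what remains sits far above the audio band where a simple analog filter removes it (the filter itself is not modeled here).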
Each sample should be looked at as an impulse. Upsampling DACs insert 0-amplitude samples between each actual sample. An upsampling ratio of 256 inserts 255 0-value samples between each successive input sample. This series of impulses (most of which are 0) feeds into a sin(x)/x reconstruction filter. The result is a high sample rate waveform containing the original signal plus some high frequency noise. A zero-order hold is often unavoidable, but this typically occurs at 256Fs in an oversampled DAC. This implies that the noise produced by the zero-order hold starts near 256Fs/2, or 128 times the original sample rate. This high-frequency noise is well above the Nyquist frequency of the original samples and is easily removed with a simple analog low-pass filter. The end result is a continuous band-limited waveform that is an exact replica of any band-limited input waveform. All of the remaining noise is a result of quantization errors and dither noise. If dither was properly applied before quantizing in the A/D, the errors will be random. The end result is the original continuous waveform plus white noise, where the white noise is not correlated with the audio.
All existing 24-bit A/D converters generate enough thermal noise to provide adequate dither for the initial conversion process. This thermal noise gives 24-bit A/D converters a linear response. This is not always the case with 16-bit A/D converters, or 16-bit outputs from 24-bit converters. These low-resolution converters often need to have dither added in order to achieve a linear amplitude response.
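A small sketch shows what "linear response" means here. The amplitude (0.4 LSB), dither width, and tone are illustrative choices, with everything expressed in units of one LSB:

```python
import numpy as np

# Sketch: a tone smaller than half an LSB vanishes under plain rounding
# but survives when TPDF dither is added first. Amplitudes are in units
# of one LSB; the 0.4 LSB level and the dither width are illustrative.
rng = np.random.default_rng(0)
n = 48000
x = 0.4 * np.sin(2 * np.pi * 1000.0 * np.arange(n) / 48000.0)

undithered = np.round(x)                 # every sample rounds to zero

# TPDF dither: the sum of two uniform variables, 2 LSB peak-to-peak.
d = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + d)

# The dithered output is noisy but still correlated with the input;
# the tone is preserved beneath the noise instead of being erased.
corr = np.corrcoef(x, dithered)[0, 1]
```

Without dither the converter is a staircase and the sub-LSB tone is simply gone; with dither (here added explicitly, in a 24-bit converter supplied by its own thermal noise) the tone is encoded in the bit stream.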
Any DSP operation that involves truncation will need an additional application of dither prior to the truncation.
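The same idea applies to word-length reduction inside a DSP chain. This hypothetical sketch requantizes a sub-LSB DC level (think of detail below the 16-bit LSB after a gain change); the 0.3 LSB level and trial count are assumptions:

```python
import numpy as np

# Sketch: reducing word length (e.g. 24 -> 16 bits) without dither gives
# a biased, nonlinear result for sub-LSB detail; TPDF dither applied
# before the reduction restores a linear average response. Values are in
# units of one 16-bit LSB; the 0.3 LSB level is illustrative.
rng = np.random.default_rng(1)
level = 0.3                      # a DC level 0.3 LSB above zero
n = 40000

no_dither = np.round(np.full(n, level))    # always quantizes to 0

d = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
with_dither = np.round(level + d)

mean_out = with_dither.mean()    # averages back to ~0.3 LSB
```

With TPDF dither the average output equals the input regardless of where it falls between quantization levels, which is precisely the linearity that undithered truncation destroys.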
The sin(x)/x filter reconstructs the original continuous band-limited signal from a series of uniformly spaced samples.
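This reconstruction can be written out directly: each sample contributes one weighted sinc pulse, and the sum reproduces the waveform between the samples. A sketch with illustrative frequencies, evaluated away from the ends of the finite record (the true sum runs over infinitely many samples):

```python
import numpy as np

# Sketch of sin(x)/x (Whittaker-Shannon) reconstruction: a weighted sum
# of sinc pulses, one per sample, evaluated between the sample instants.
fs = 1.0
f = 0.2                          # a tone at 0.2*fs, below Nyquist (0.5*fs)
n = np.arange(2000)
x = np.sin(2 * np.pi * f * n / fs)

# Evaluate between samples, away from the ends of the finite record.
t = (np.arange(800, 1200) + 0.37) / fs
y = np.sinc(fs * t[:, None] - n[None, :]) @ x

exact = np.sin(2 * np.pi * f * t)
err = np.max(np.abs(y - exact))  # small: the values between samples are
                                 # recovered, not guessed
```

The values between the samples are not interpolated by guesswork; for a band-limited signal they are fully determined by the samples, which is the content of the sampling theorem.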
Your reader seems to have a general disregard for the Nyquist theorem. This leads to all sorts of erroneous conclusions. He seems to dismiss Nyquist as obsolete.
There is nothing obsolete about the Nyquist theorem; the physics and mathematics have not changed.
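The theorem's 2x bound is easy to demonstrate. In this sketch (frequencies are illustrative), a tone above Nyquist produces samples identical to those of its alias below Nyquist, which is exactly why the band limit matters:

```python
import numpy as np

# Sketch: the Nyquist limit in two lines. Sampled at 1 kHz, a 900 Hz
# tone is indistinguishable from a -100 Hz alias; any tone below fs/2
# is captured unambiguously. Frequencies are illustrative.
fs = 1000.0
t = np.arange(2000) / fs

above = np.sin(2 * np.pi * 900.0 * t)   # 900 Hz: above Nyquist (500 Hz)
alias = -np.sin(2 * np.pi * 100.0 * t)  # its alias: 900 - 1000 = -100 Hz

same = np.allclose(above, alias)        # the sample values are identical
```

Below the Nyquist frequency the samples identify the signal uniquely; above it they do not. That is a mathematical fact, not a design fashion that can go out of date.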