Dr. AIX's POSTS

Taming the Terminology: Lossy vs. Lossless

Can a PCM digital audio file, at any sampling rate and word length, ever be considered “lossy”? Despite a recent article on a popular audiophile website, the answer is an unambiguous NO…never! By definition, Pulse Code Modulation is a lossless digital encoding scheme that preserves 100% of the original signal given the right filtering and parameters. That the simplest terms can get twisted and recast by casual and uninformed audio writers is not really surprising, but it is problematic for readers looking for accurate information. Let’s briefly take a look at these terms and explore the underlying fallacy put forth in the piece.

The terms lossy and lossless ONLY apply to compressed audio formats like MP3, AAC, AC3, and MQA. Lossy means that some of the source audio information is discarded, forever, when the file is encoded; no amount of decoding can bring it back. If the process doesn’t restore 100% of the original signal, which a properly designed lossless encode of PCM can do, then it is considered “lossy”. It really doesn’t matter whether listeners can detect that information is missing. I’ll grant you that it can be very difficult to perceive the difference between a lossy algorithm and a lossless one. In fact, some listeners may prefer an encoding technique that “masks” some low-amplitude signal or “folds” ultrasonic partials under the “in band” information. However, one’s subjective preference doesn’t change the technical reality.
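As a sketch of what “lossless” means in practice, the round trip below uses Python’s general-purpose zlib compressor as a stand-in for an audio codec like FLAC or MLP (the codec choice and the test signal are my illustration, not anything from the article): the packed data is smaller, yet decoding restores every original bit.

```python
import array
import math
import zlib

# Synthesize one second of a 1 kHz sine wave as 16-bit PCM samples
# (rate, frequency, and amplitude are illustrative values).
rate = 44100
pcm = array.array("h", (int(32767 * 0.5 * math.sin(2 * math.pi * 1000 * n / rate))
                        for n in range(rate)))
raw = pcm.tobytes()

# Lossless compression: the packed stream is smaller than the source,
# but decompressing it restores every bit of the original data.
packed = zlib.compress(raw)
restored = zlib.decompress(packed)

assert restored == raw            # bit-for-bit identical to the source
print(f"{len(raw)} bytes in, {len(packed)} bytes packed, identical after decode")
```

A lossy codec fails exactly this test: the decoded output is close to the source, but not equal to it.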

A truly lossless audio format like MLP (Meridian Lossless Packing, from the same folks that are behind MQA), now known as Dolby TrueHD thanks to a licensing arrangement with the San Francisco-based company, guarantees that every single digital bit of information present at the input of the encoder is present at the output of the decoder. When the DVD Forum was casting around for a scheme to reduce the bandwidth needed for 6 channels of 96 kHz/24-bit high-resolution surround sound on a single-speed DVD disc, the very bright people at Meridian accepted the challenge, demonstrated their amazing process, and were awarded the exclusive contract for the DVD-Audio format. MLP, Dolby TrueHD, FLAC, DTS-HD Master Audio, and a few other codecs have demonstrated their ability to deliver the bits from the source to the destination.

One of the examples discussed in the article focused on the fact that many, if not most, new recordings are being made using sample rates and word lengths higher than 44.1 kHz/16-bit. In fact, iTunes requires that labels submit their digital masters at 96 kHz/24-bit. If the engineer at the studio captures the sessions at 48 kHz/24-bit and then downconverts those recordings to 44.1 kHz/16-bit, “then the CD version would thus in effect be lossy”. Wrong! The downsampled CD version would still be lossless because, as I pointed out above, there is no such thing as “lossy” PCM. The Red Book specification for compact discs doesn’t include a chapter on data compression or codecs.

The process of downconverting or downsampling can be done a number of different ways. Usually, a software program does the conversion in real time or out of real time depending on the complexity of the algorithm. It’s possible to downconvert from a higher sample rate and longer words during an analog transfer. The output of a DAC is passed to the input of an ADC running with a slower clock. However, in all of these processes the end result is still lossless.
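To make the word-length half of such a conversion concrete, here is a minimal Python sketch of requantizing 24-bit samples to 16 bits with TPDF dither (the function name and sample values are mine; a real downconversion would also apply a proper low-pass resampling filter, which is omitted here):

```python
import random

def reduce_word_length(samples_24bit, out_bits=16):
    """Requantize 24-bit integer samples to a shorter word using TPDF dither.

    TPDF dither sums two independent uniform random values spanning
    +/- half an LSB of the target word length before rounding, which
    turns correlated quantization distortion into benign noise.
    """
    shift = 24 - out_bits                 # number of bits being discarded
    lsb = 1 << shift                      # one LSB at the target word length
    out = []
    for s in samples_24bit:
        dither = random.uniform(-lsb / 2, lsb / 2) + random.uniform(-lsb / 2, lsb / 2)
        q = round((s + dither) / lsb) * lsb        # snap to the coarser grid
        out.append(max(-(1 << 23), min((1 << 23) - 1, q)))  # clamp to 24-bit range
    return out

# Example: three arbitrary 24-bit sample values
converted = reduce_word_length([0, 1_000_000, -1_000_000])
```

Every output value lands on the coarser 16-bit grid (a multiple of 256 on the 24-bit scale), within one LSB of the input.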

The confusion may lie in the notion that there is a difference between the original master and the final CD-specification audio. If a 96 kHz/24-bit PCM master is “mastered” for CD release, some of the data of the original recording will not make it onto the final disc. That’s a fact. But that still doesn’t mean we should call the CD a lossy format.

It gets even more confusing when you understand why recording engineers use higher sample rates and longer words in the first place. I’ve written about this before. The additional 8 bits provide increased “headroom” at the time of the original recording, and recording engineers don’t want to exceed the available headroom of their recording system. We had limits during the analog tape days and we still have limits today, though thankfully high-resolution has brought great potential fidelity. However, mastering engineers have a different task than the recording and mixing engineers. Their job is to adjust the tonal balances, reduce the dynamic range, and increase the amplitude of the overall track. The reduction of dynamic range and increase of amplitude takes the original 24 bits down to 8-10 bits of actual use; remember that each digital bit is roughly equivalent to 6 dB of dynamic range.
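That 6 dB rule of thumb comes straight from the arithmetic of doubling: each extra bit doubles the number of quantization levels, and doubling amplitude resolution adds 20·log10(2) ≈ 6.02 dB. A quick check:

```python
import math

# Each bit doubles the number of quantization levels; doubling
# amplitude resolution adds 20*log10(2) ~ 6.02 dB of dynamic range.
db_per_bit = 20 * math.log10(2)

for bits in (8, 10, 16, 24):
    print(f"{bits:2d} bits -> ~{bits * db_per_bit:5.1f} dB")
```

So 16 bits gives roughly 96 dB of potential range, 24 bits roughly 144 dB, and a track mastered down to 48-60 dB of real dynamic range is exercising only 8-10 bits of that potential.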

So even the master won’t be any different when downconverted from 24 bits to 16 bits, because the mastering engineers have already knocked off the extra 8 bits. That’s just the way it is for most commercial releases, including jazz and classical titles.

CDs are not lossy. There should never have been any doubt.

Dr. AIX

Mark Waldrep, aka Dr. AIX, has been producing and engineering music for over 40 years. He learned electronics as a teenager from his HAM radio father while learning to play the guitar. Mark received the first doctorate in music composition from UCLA in 1986 for a "binaural" electronic music composition. Other advanced degrees include an MS in computer science, an MFA/MA in music, BM in music and a BA in art. As an engineer and producer, Mark has worked on projects for the Rolling Stones, 311, Tool, KISS, Blink 182, Blues Traveler, Britney Spears, the San Francisco Symphony, The Dover Quartet, Willie Nelson, Paul Williams, The Allman Brothers, Bad Company and many more. Dr. Waldrep has been an innovator when it comes to multimedia and music. He created the first enhanced CDs in the 90s, the first DVD-Videos released in the U.S., the first web-connected DVD, the first DVD-Audio title, the first music Blu-ray disc and the first 3D Music Album. Additionally, he launched the first High Definition Music Download site in 2007 called iTrax.com. A frequent speaker at audio events and author of numerous articles, Dr. Waldrep is currently writing a book on the production and reproduction of high-end music called "High-End Audio: A Practical Guide to Production and Playback". The book should be completed in the fall of 2013.

10 thoughts on “Taming the Terminology: Lossy vs. Lossless”

  • An audio with variable bitrate cannot be called lossless. Moreover, a codec usually implies good compression. So, unfortunately, there is only one such named OptimFROG while others ain’t real lossless codecs, indeed.

    • Not true. Any codec that can deliver all digital information from source to destination is lossless. MLP and FLAC are lossless codecs.

      • So, you believe that a ‘losslessly’ compressed audio file with variable bitrate is lossless? But how can it be purely lossless when the bitrate falls below the original file bitrate, which is of course stable?

        • The bitrate has no effect on whether something is lossy or lossless. The only thing that matters is whether all of the data is preserved from source to destination.

          • Obviously, lower bitrates, no matter how unaffected the signal thus gets, could only come from silence periods on the recording; otherwise, the codec being used is lossy. That means simply that the codec, be it MLP or, unfortunately, FLAC tampers with the entire digital stream. That’s why OptimFROG is positioned as ZIP-like compression, but I suspect that even OptimFROG somehow meddles in the audio file.

  • david gregory

    I’m confused… In your book, in the chapter on MQA, you refer to the truncation of the data stream from 24-bits to 17-bits as “a lossy process.” I realize this article isn’t about MQA, but isn’t truncation truncation, and aren’t the effects the same?

    Maybe only partially related question, but I have seen some other articles by recording pros strongly recommending the use of dither anytime the bit depth of a recording is altered, the stated rationale being the avoidance of quantization noise and/or distortion at low levels. Did you do this with the CD releases of your recordings that were originally 24/96? Sounds like the answer would be ‘No’ based on this column. Can you shed some further light on these questions? Thanks.

    • Dither is a necessary part of downconverting. I never made CD versions of my recordings except in rare instances, and I did use dither during those conversions. The key to this lossy vs. lossless debate is to establish what you start with and what you end up with. MQA is a lossy codec because information that was present in the original source (for example, my high-resolution tracks) is lost after the MQA reconstruction. Their 24-bit to 17-bit truncation is a lossy procedure. My point about MQA being a solution to a non-problem is that there isn’t any 24-bit or ultrasonic content coming from the usual sources. So the sources from which MQA is working aren’t getting any benefits. The only labels that will benefit are ones like mine. But it’s being sold as a fidelity improvement to the big boys.
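      MQA’s internals are proprietary, so the toy sketch below (my own example, not MQA’s actual process) only illustrates why bit truncation in general is irreversible: once the low-order bits of a sample are cleared, nothing in the remaining bits records what they were, so no decoder can restore them.

      ```python
      def truncate_to_17_bits(sample_24bit):
          """Zero the 7 least significant bits of a 24-bit sample (destructive)."""
          return sample_24bit & ~0x7F

      original = 0x123457                  # an arbitrary 24-bit sample value
      truncated = truncate_to_17_bits(original)

      assert truncated != original                         # low 7 bits are gone
      assert truncate_to_17_bits(truncated) == truncated   # and cannot come back
      ```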

  • John Deas

    Hi Mark,

    I thought Mp3, AAC, WMA were all PCM based digital audio formats? Isn’t the ‘lossy’ aspect that they by their design leave out data if a ‘higher’ format is converted into them? The point being it’s the conversion of one format to another that leads to the terminology of lossless, lossless compressed or lossy i.e if you studio recorded directly to mp3 you get what you get with the limitation of the format but it’s not actually ‘lossy’ in itself it just cannot capture the same detail as a higher definition sampling can?

    Cheers.

  • John Duncan

    I believe you are mistaken when you say, “The reduction of dynamic range and increase of amplitude takes the original 24-bits down to 8-10 bits — remember that each digital bit is roughly equivalent to 6 dB of dynamic range.”

    To illustrate, imagine a loud signal that is very dynamically crushed and where each wave cycle peaks between 32760 and 32767 (on a 16-bit quantizing scale that runs from -32768 to +32767). Each wave cycle must still pass through at least all of the points between -32760 and +32760. Any of those values in between may need to be sampled depending on where in the wave cycle the sampler samples. This requires all 16 bits even though the dynamic range of the signal is maybe only 1 or 2 dB. The only time dynamic compression results in lowering the number of bits is when the signal is made extremely quiet. If the amplitude peaks at 7 on the quantizing scale then the sampler only needs 4 bits to encode every possible amplitude in the wave cycle (-7 to +7).

    • Admin

      John, I understand your argument, but I’m referring to the net dynamic range of the system after the compression applied by a mastering engineer. If each bit gives us roughly 6 dB of dynamic range, then a recording that measures 48-50 dB of range would be using only 7-8 bits of the potential dynamic range; 7-8 bits is equivalent to about 42-48 dB. The digitally encoded signal may traverse all of the intervening values (in your 16-bit model), but the ultimate reduction in dynamic range is equivalent to a system using fewer bits.

