Dr. AIX

Mark Waldrep, aka Dr. AIX, has been producing and engineering music for over 40 years. He learned electronics as a teenager from his HAM radio father while learning to play the guitar. Mark received the first doctorate in music composition from UCLA in 1986 for a "binaural" electronic music composition. Other advanced degrees include an MS in computer science, an MFA/MA in music, a BM in music and a BA in art. As an engineer and producer, Mark has worked on projects for the Rolling Stones, 311, Tool, KISS, Blink 182, Blues Traveler, Britney Spears, the San Francisco Symphony, The Dover Quartet, Willie Nelson, Paul Williams, The Allman Brothers, Bad Company and many more. Dr. Waldrep has been an innovator when it comes to multimedia and music. He created the first enhanced CDs in the 90s, the first DVD-Videos released in the U.S., the first web-connected DVD, the first DVD-Audio title, the first music Blu-ray disc and the first 3D Music Album. Additionally, he launched the first High Definition Music Download site, iTrax.com, in 2007. A frequent speaker at audio events and the author of numerous articles, Dr. Waldrep is currently writing a book on the production and reproduction of high-end music called "High-End Audio: A Practical Guide to Production and Playback". The book should be completed in the fall of 2013.

12 thoughts on “Super Live Audio: Part II”

  • November 25, 2014 at 12:17 pm
    Permalink

    I find their arguments to be quite strange and nonsensical.

    Reply
  • November 25, 2014 at 4:16 pm
    Permalink

    It sounds like KV2 is going with the myth that a sample rate like 44.1 kHz (or 96 kHz for that matter) is incapable of sufficient “time resolution” down to 5 microseconds. They seem to imply that stuff that happens in between samples is not captured. This myth seems to be cropping up more and more often–at least I’m seeing it more frequently in the last year or so. But it is a myth, and the sound that happens between samples is captured perfectly by 24 bits and a sample rate of 96 kHz.
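    This is easy to demonstrate numerically. The sketch below (Python/NumPy, with an arbitrary 1 kHz test tone) shows that a 5 microsecond time shift, far smaller than the 44.1 kHz sample period of roughly 22.7 microseconds, still changes the sampled values, so the timing information is preserved in the samples:

```python
import numpy as np

fs = 44100            # CD sample rate (Hz); sample period ~22.7 us
f = 1000.0            # arbitrary 1 kHz test tone
delay = 5e-6          # 5 microsecond shift -- far below one sample period

t = np.arange(1024) / fs
a = np.sin(2 * np.pi * f * t)            # original tone
b = np.sin(2 * np.pi * f * (t - delay))  # same tone shifted by 5 us

# The shift is much smaller than a sample period, yet the sampled
# values differ measurably: sub-sample timing is encoded in the
# amplitudes of the samples, not lost between them.
diff = np.max(np.abs(a - b))
print(f"max sample difference: {diff:.4f}")
```

    For a band-limited signal the two sample sequences are distinct, and reconstruction recovers the shifted waveform exactly; nothing "between the samples" is thrown away.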

    Reply
  • November 25, 2014 at 5:19 pm
    Permalink

    Not related to the topic, but I was taken by surprise.
    I was reading a well-respected magazine this afternoon and I was surprised to encounter the following sentence:
    “In addition, we have improved the sound quality of our Misa Criolla by up-conversion of the original CD rip to a resolution of 192 kHz/32-bit with apodization and dither applied using iZotope RX-3 Advanced.” — Charles Zeilig, Ph.D., and Jay Clawson (TAS January 2015)
    Have I missed something?

    Reply
    • November 26, 2014 at 8:21 am
      Permalink

      These are the same guys that did the research on FLAC etc. I’ll take a look at the article…but am very doubtful about any fidelity change. They may have changed the sound of the piece and like it better but it’s never going to be high-resolution.

      Reply
    • November 26, 2014 at 3:58 pm
      Permalink

      For those who understand that digital systems are about more than amplitude and frequency response, it’s not at all hard to believe they improved the sound. Many of the better DACs around have apodizing filters. These remove the pre-ringing that is present with typical finite impulse response filters. Upsampling and implementing an apodizing filter would allow much of the same benefit to be heard on DACs with standard FIR filters, because the pre-ringing introduced at 192 kHz is shorter and therefore probably less audible, or not audible at all.

      Reply
      • November 27, 2014 at 11:10 am
        Permalink

        It may not be hard to imagine that they “changed” the sound, but they most certainly did not improve the sound as you state. The fidelity of a recording is established at the time of the original source recording. I will be addressing the development, use, and effectiveness of apodizing filters. The notion that upsampling a CD specification recording to 192 kHz/24-bits using digital processing would improve its fidelity is wishful thinking. The fidelity will remain as it was at 44.1 kHz/16-bits…the sound may change and be more euphonious to those who want to tweak things away from the mastered sound, but at what cost in terms of space and bandwidth?

        Reply
        • November 28, 2014 at 5:55 am
          Permalink

          They cannot improve the bandwidth or dynamic range (no argument there), but think about it this way: the signal is whatever it is, but you still have to play it back. Using a finite impulse response filter adds both pre- and post-ringing. That ringing does not exist within the digital data; it is only created upon reconstruction into an analog waveform. An apodizing filter shifts all of the ringing to after the impulse, which is closer to the physical phenomenon of sound generation. So, the processed file is at least as faithful to the original recording as playing it back through a FIR filter. I’ll reiterate that the processing described is simply a way to let people take advantage of advances in digital filter architecture without changing their hardware.

          I don’t subscribe to that magazine, so I can only go off the quote in the above post. We could, and probably would, argue round and round about what constitutes an improvement in fidelity, but an improvement in sound can only be evaluated by listening.

          Reply
          • November 28, 2014 at 11:02 am
            Permalink

            I’m in complete agreement with you on this. We do want to get the very best reproduction from whatever files and formats are captured. And using apodizing filters does maximize the accuracy of the playback.

  • November 25, 2014 at 10:23 pm
    Permalink

    Not to worry, Paul McGowan at PS Audio already has the “Product of the Year” DirectStream DAC, which converts everything input to it to DSD output. He’ll save us all whether we want it or not.

    Reply
    • November 26, 2014 at 8:21 am
      Permalink

      I saw that announcement and I know that he’s honestly proud of the recognition. Oh well.

      Reply
  • November 26, 2014 at 4:13 pm
    Permalink

    I believe there are studies showing that humans are not very good at detecting differences in relative phase, but do you have a source for saying that timing is less important than amplitude and frequency, or is that just your opinion? I would guess the timing differences we can most easily detect are those at the beginnings of sounds, which is different from relative phase of steady-state sine waves.

    Here are some articles relevant to the topic:
    http://boson.physics.sc.edu/~kunchur/Acoustics-papers.htm
    http://phys.org/news/2013-02-human-fourier-uncertainty-principle.html

    Reply
    • November 27, 2014 at 11:14 am
      Permalink

      The “timing” issues that were discussed in the article focused on phase differentiation. And yes, there are studies that have shown that humans are poor at perceiving phase in sine waves or complex sounds. Thanks for the links…I’ll take a look.

      Reply
