Dr. AIX's POSTS

My Score Was 0%…a Perfect Success!

Lately, I’ve been working with Scott Wilkinson over at AVS Forum to create a quasi-meaningful test to determine whether people with very good playback systems can “perceive” the difference between a bona fide 96 kHz/24-bit track and the downconverted 44.1 kHz/16-bit version of the same track. We’re just about ready to launch the thing as soon as the technical people at AVS Forum figure out the right place to put the test files. This is certainly not going to be the rigorous test that I’m planning, but we thought it would be interesting to let readers participate in a survey that could actually provide some useful feedback.

I’ve seen a number of surveys lately that attempt to determine whether people can “hear” the difference between high-resolution audio and standard compressed files, or the difference between 16- and 24-bit audio. But just as the Boston Audio Society study failed because none of the content evaluated during their study was real high-resolution audio, the current surveys are using content downloaded from the web that is labeled high-resolution but in reality isn’t. So Scott and I decided to try a similar test using a few of my recordings that actually do have ultrasonics and dynamic range in excess of standard CDs.

I pulled three tracks from my catalog. There’s the “Mosaic” track that I’ve shared repeatedly with readers; a track from Steve March Torme and a big band that has a dynamic range above 100 dB as well as ultrasonics; AND a track by trumpeter Wallace Roney and his band playing with a Harmon mute, which produces lots of energy above 20 kHz.

The plan is to provide these tracks at 96/24 and 44.1/16 and let users compare them using the ABX methods provided by Foobar (PC) and ABX Tester (Mac). Scott has written up the whole thing and will launch it on AVS Forum very shortly. I have the spectrograms and audio as well and will make them available as soon as the thing goes live on AVS.

But in the midst of preparing the files, I downloaded the ABX Tester app for my Mac to check out how it works. The program allows you to load “A” and “B” audio tracks and then randomly builds 5 “X” items. You can listen to “A” or “B” as many times as you want, from anywhere in the track, and then listen to the “X” item. Based on what you hear, you choose whether you believe the track is the “A” or “B” version.
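
For the curious, the logic of a tool like this boils down to something like the following toy Python sketch (purely illustrative…this is not the actual ABX Tester code):

```python
import random

def run_abx(num_trials=5):
    """Toy model of an ABX session: X is randomly A or B on each trial,
    and the listener must say which one it matches."""
    score = 0
    for trial in range(1, num_trials + 1):
        x = random.choice(["A", "B"])  # the hidden assignment for this trial
        answer = input(f"Trial {trial}: is X the A or B version? ").strip().upper()
        if answer == x:
            score += 1
    print(f"You matched {score} of {num_trials} ({100 * score // num_trials}%)")

run_abx()
```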

So I took the test myself yesterday. The first track that I plugged into the ABX Tester was “On the Street Where You Live” by Steve March Torme. This track has a tremendous dynamic range and lots of ultrasonics. I know that the last 40 seconds contain the highest dynamic range, so I concentrated on that section as I did the test. I listened to all 5 “X” items and made my choices.

The system I used consisted of my tower Mac connected via USB to a Benchmark DAC1 driving a set of Oppo PM-1 headphones. I didn’t really go to any effort to optimize my setup…after all, I just wanted to check out the ABX Tester app. But this equipment is more than capable of delivering great reproduction.

To my utter disappointment and complete surprise, I failed miserably to correctly identify the high-resolution version. When I clicked on the “Check Answers” button, the window popped up and told me that I got 0% right. My instantaneous reaction was complete shock. I could see my life’s work going down the toilet.

Then I thought a moment. The fact that I got them ALL wrong merely indicated that I swapped the high-resolution and standard-resolution files during the test. Getting 0% is the same as getting 100%! I was able to distinguish the difference between the two files every time…I simply reversed the “A” and “B” source files.

Even after this realization, I was genuinely surprised that I scored a perfect 0%. It’s true that I know what to listen for and clearly heard the difference in my tracks…but I didn’t think I would get this result the first time out.

You can do this today with the “Mosaic” track that is on the FTP site. Download ABX Tester and try it yourself. Can you get 5 out of 5? I did…and I’m an old man with aging ears.

Dr. AIX

Mark Waldrep, aka Dr. AIX, has been producing and engineering music for over 40 years. He learned electronics as a teenager from his HAM radio father while learning to play the guitar. Mark received the first doctorate in music composition from UCLA in 1986 for a "binaural" electronic music composition. Other advanced degrees include an MS in computer science, an MFA/MA in music, a BM in music and a BA in art. As an engineer and producer, Mark has worked on projects for the Rolling Stones, 311, Tool, KISS, Blink 182, Blues Traveler, Britney Spears, the San Francisco Symphony, The Dover Quartet, Willie Nelson, Paul Williams, The Allman Brothers, Bad Company and many more. Dr. Waldrep has been an innovator when it comes to multimedia and music. He created the first enhanced CDs in the 90s, the first DVD-Videos released in the U.S., the first web-connected DVD, the first DVD-Audio title, the first music Blu-ray disc and the first 3D Music Album. Additionally, he launched the first High Definition Music Download site, iTrax.com, in 2007. A frequent speaker at audio events and author of numerous articles, Dr. Waldrep is currently writing a book on the production and reproduction of high-end music called "High-End Audio: A Practical Guide to Production and Playback". The book should be completed in the fall of 2013.

34 thoughts on “My Score Was 0%…a Perfect Success!”

  • Since the frequency response of the PM-1 is down quite a lot by 20 kHz (see http://stereos.about.com/od/Measurements/ss/Oppo-Digital-PM-1-Headphone-Measurements.htm), that suggests the ultrasonics are not the key difference. Unless you’re listening at a level where the peaks exceed 100 dB SPL (very loud), the bit depth shouldn’t matter.

    That leads us to the interesting question of why the 24/96 tracks sound better. I’ve always thought it had to do with moving the antialiasing filter far outside the traditional audio band, which is consistent with why most listeners also report DSD and high-resolution PCM transfers of analog tape sound better than the same material on CD.

    • Interesting point. To test the ultrasonics component more specifically, then, couldn’t some type of filter be applied to the 96/24 track to remove frequencies that wouldn’t be contained in the Redbook standard? I’m not an audio engineer, but from an experimental standpoint this would be preferable, as it would manipulate only one variable at a time.
      Then, if the difference Mark detected was lost, it might not be the ultrasonics per se, but either something else caused by downsampling that reduces fidelity, or the 24 to 16-bit conversion.
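
      For anyone who wants to try that at home, here is a rough sketch of the idea in Python (the scipy and soundfile packages are assumed, and the filenames are just illustrative):

      ```python
      import soundfile as sf
      from scipy import signal

      # Read the hypothetical 96 kHz/24-bit source file.
      audio, rate = sf.read("track_96k24.wav")  # rate should be 96000

      # Low-pass at ~21 kHz so nothing remains above the 22.05 kHz Redbook
      # Nyquist limit; an 8th-order Butterworth is used here as an example.
      sos = signal.butter(8, 21000, btype="lowpass", fs=rate, output="sos")
      filtered = signal.sosfiltfilt(sos, audio, axis=0)

      # Write it back at the original 96 kHz rate and 24-bit depth, so the
      # only variable changed is the ultrasonic content.
      sf.write("track_96k24_no_ultrasonics.wav", filtered, rate, subtype="PCM_24")
      ```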

  • Thanks for this interesting test.

    In my experience, USB DACs can be sensitive to the streaming mode (Kernel Streaming, WASAPI, ASIO) and to PC driver buffering and latency.

    Perhaps another interesting test would be to down-sample your HD recordings to 44.1/16 and up-sample them back to the rate and depth of the original file, 96/24 or whatever.

    In this way, the DAC used for testing receives the same sample rate and bit depth, removing that variable from the test.
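
    A rough Python sketch of that round trip (scipy and soundfile assumed, filenames illustrative) might be:

    ```python
    import numpy as np
    import soundfile as sf
    from scipy import signal

    audio, rate = sf.read("track_96k24.wav")  # hypothetical 96 kHz source

    # Down to 44.1 kHz (96000 * 147/320 = 44100), then back up to 96 kHz.
    down = signal.resample_poly(audio, 147, 320, axis=0)

    # Quantize to 16 bits with simple TPDF dither, as a CD master would be.
    lsb = 1.0 / 32768.0
    dither = (np.random.rand(*down.shape) - np.random.rand(*down.shape)) * lsb
    down16 = np.round((down + dither) * 32767.0) / 32767.0

    up = signal.resample_poly(down16, 320, 147, axis=0)

    # Same 96/24 container as the original, limited to 44.1/16 information.
    sf.write("track_roundtrip_96k24.wav", up, rate, subtype="PCM_24")
    ```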

    – Rich

  • Now try it without headphones and let us know your score. When listening to one of your tracks through my HD600s, and listening intently for high-frequency cues, I could perceive a difference. With my B&W 804s, not so much.

  • Why don’t you provide 96/24, 96/16, 44.1/24, and 44.1/16?

    • I can do that…stay tuned.

  • Sorry, but you didn’t score 100% correctly after all. As with all ABX tests, ABXTester simply determines whether you can discern a difference by forcing you to match X to either A or B. It does not matter which file you load in which position, or whether you swapped them, misnamed them, whatever. If you scored a “0”, that means you couldn’t correctly identify the choice X was identical to in any of the tests you ran.

    It’s a small but key point: you didn’t just fail to identify the high-resolution track, since that’s not really the point. You couldn’t match either the high-resolution or down-sampled track to its identical choice. An ABX test determines audibility of difference only.

    Another small but key point: 5 trials is statistically insignificant. You need to run at least 20 or more for adequate statistical resolution, especially when the differences are small enough to appear somewhat random. I’ve always puzzled over why ABXTester only provides 5 trials. You can always run ABXTester five times and compile your own result, but that’s what it takes.

    But don’t despair: as an ABX tester, ABXTester is pretty much useless. It’s impossible to make seamless, synchronized switches, which is essential for this kind of comparison. Auditory memory is painfully short. Fractions of a second between comparisons pretty much destroy the resolution of the test, and ABXTester has no way to accomplish a quick switch between files, much less do it while they both play in sync. Test results with ABXTester are almost certainly very low resolution. I wouldn’t bother with it. The ABX process in Foobar is better. The switching is fairly fast, but there is a short gap between choices. Foobar has unlimited trials and compiles the results immediately, which is actually NOT good, as it presents an uncontrolled bias: as you see your failure rate increase, stress increases, which could affect the test results. Foobar can be run in a virtualized PC environment on a Mac. However, there is currently no proper ABX comparator software available for a properly controlled ABX test of this sort, at least in part because it’s not possible to instantly and seamlessly switch the data sampling rate fed to a single DAC. The test technically requires two identical DACs and computers with an analog ABX comparator, and even then we are comparing only what that particular DAC does at different rates.

    ABX testing is NOT trivial, nor is it easily accomplished with any meaningful accuracy. I’m not in favor of an AVS Forum-hosted test of this kind because of the complete lack of control. If ABX test results are to be meaningful, control of all variables is an absolute requirement. As an example, suggesting that Mac users use ABXTester and PC users use Foobar pretty much pollutes the stats.

    If what you are trying to prove is that 24/96 files sound different than 16/44 files, throwing it out to the masses will only generate noisy statistics, likely spun towards a cultural or popular bias.

    • I guess I would have to delve into the mechanism behind the ABX Tester a little further to understand your point. If I missed every one by saying A was B or vice versa, and therefore received 0%…doesn’t that mean that I got them exactly backwards? I don’t have much time today…racing to get everything ready for my trip to New York and CE Week.

      I do recognize that this whole online testing thing is not rigorous or meaningful. Both Scott and I recognize this…it seems like a fun thing to do. I take no real stock in my own 5 for 5 misses or any other results from the test.

      • If you insist on an AVS-hosted ABX test, then at least require testers to submit the number of trials, and require a minimum of 20 for submission. Simply ignore the “I got it right/wrong” submissions. At least there would be enough data for basic test resolution. It would be best to standardize on the ABX comparator method if you can.

        • I’ll pass your suggestions along to Scott…thanks.

        • Steven Sullivan

          16 trials per test would actually suffice and be a bit less fatiguing. In any case, testers should pre-decide how many trials to run, and do just that many: not watch the ticker and stop when they reach ‘significance’ (which is possible to do with software that tells you your score incrementally; see the sketch below for why that inflates false positives). If they do more than one set of trials, results should be combined. No cherry-picking allowed.

          Some good guidelines in the first two posts here:

          http://www.hydrogenaud.io/forums/index.php?showtopic=16295
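
          To see why stopping at ‘significance’ is a problem, here is a toy Python simulation (purely illustrative) of a listener who hears no difference at all but peeks at a running score:

          ```python
          import random

          def peeking_listener(max_trials=16):
              """A pure guesser who stops early if the running score
              'looks significant' at any point along the way."""
              correct = 0
              for n in range(1, max_trials + 1):
                  correct += random.random() < 0.5
                  if n >= 8 and correct / n >= 0.75:  # crude running check
                      return True                     # claims success early
              return correct >= 12                    # fixed end-of-run check

          runs = 10_000
          hits = sum(peeking_listener() for _ in range(runs))
          print(f"false positive rate with peeking: {hits / runs:.1%}")  # well above 5%
          ```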

    • I agree with pretty much everything Jim stated, Mark. The only correction I’d make is that the foobar2000 ABX component has a “hide results” checkbox to eliminate instantaneous feedback that I agree can be distracting if not checked. A simple but effective video using the foobar2000 ABX component is available here: https://www.youtube.com/watch?v=jt7GyFW4hOI

      On a Mac I recommend you create a new hard drive partition, install Windows 7 or later, and run foobar2000 from there for the most stable and bit-perfect playback (I’ve done this, so if you try this and have any problems, feel free to email me). Playback via a VM may also provide both, but I don’t have direct experience running foobar2000 in a VM on a Mac. People report success with all but the most esoteric components under Wine, a free compatibility layer (not strictly a VM) available at http://www.winehq.org/download/

      Lastly, I think the test you and Scott have come up with is “neat” (a technical term). While everyone knows it’s not rigorous, it’s thoughtful and stimulates thinking, and for those reasons alone I like it. Jim has a point about different ABX test tools, but rather than declare everything useless, I’d take the opportunity to use different ABX tools and see if there are meaningful differences between their results. For example, many theorize that the small delays and clicks between tracks are a problem that affects ABX results, but I know of no controlled study that actually verifies or even supports that, so informal comparison using different ABX tools can only stimulate more thought. Sadly, I don’t have a system that can produce ultrasonics, so I don’t qualify for the avsforum informal test.

      Don’t feel bad, as you can likely edit this article when you get back. I love the saying “anything worth doing is worth doing with recoverable mistakes until you get it right”. I use that in business constantly to fend off the idle critics taking potshots at the people trying to improve things. Enjoy the trip!

  • 5 trials is just not enough. At least 10, better yet 15-20 trials for a meaningful DBT. Still, 5 correct could indicate a difference, but it’s nowhere near conclusive. Guessing heads or tails right 5 times in a row is not really difficult. Also, different sample and bit rates create uncontrolled variables when using a real playback system. We know that the theoretical difference is ONLY the high-frequency content, so that is the variable to test. To do that, we need level-matched files with the same sample and bit rate, but different high-frequency content.

    • There is only a 3.125% chance of guessing correctly 5 out of 5 times.
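
      For anyone who wants to check the arithmetic, or see how many correct answers a 20-trial run needs before guessing becomes an unlikely explanation, a quick sketch (scipy assumed):

      ```python
      from scipy.stats import binom

      print(0.5 ** 5)  # 0.03125 -> the 3.125% figure for 5-of-5 by chance

      # For 20 trials: probability of getting at least k right by pure guessing.
      for k in range(13, 18):
          p = binom.sf(k - 1, 20, 0.5)  # P(X >= k) with n=20, p=0.5
          print(f"{k}/20 or better by chance: p = {p:.4f}")
      # 15/20 is the first score with p < 0.05
      ```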

      • I feel pretty good about the results…but obviously, a real study needs to be done.

  • Camilo Rodriguez

    Hi Mark,

    I listened to the AIX sample tracks and compared the 16/44.1 vs the 24/96 files, and I have to say it’s not easy to distinguish them. I didn’t do a proper ABX test, so I think I might just have heard what I wanted to hear. At least I believe I could tell the difference on Mosaic, especially listening to the chimes towards the end of the track. I believe I can pick out the MP3 version more easily, but without an ABX test I could be fooling myself. Just as Borden comments above, I believe I hear distinct differences with headphones, but with speakers I simply fail to do so (I used a Benchmark DAC1 HDR as D/A converter, a Violectric V100 headphone amp, a pair of AKG K702s and a pair of Sennheiser HD 800s; for speakers I used a NAD C 390DD and a pair of KEF Q900s). It also makes it harder for me when there is just one guitar on most of the track; I think I would have more to listen to with a larger ensemble.

    I think taking care to provide the appropriate files and recorded content for a test is paramount, but I also think the playback system plays an equally important role. You have made reference to the Meyer & Moran test before, regarding both the files and the playback system used as being inappropriate for the tests they carried out. I also believe John Siau has a valid and important point when he writes: “Anyone who thinks they can hear the difference between 16-bit and 24-bit digital audio through a “17-bit” power amplifier is fooling themselves”, and that: “If [y]our playback system can’t resolve anything better than CD quality, then “High-Resolution Audio” will remain an illusion.” In that sense, what would you consider the minimum performance a system (be it an entire stereo rig or a desktop system for headphones) has to deliver for the test to actually be feasible?

    On a related topic, and regarding the dynamic range of 16-bit audio, I recently re-read a by now well known article at xiph.org entitled “24/192 Music Downloads …and why they make no sense” (http://people.xiph.org/~xiphmont/demo/neil-young.html). There’s one passage that I find particularly interesting, entitled “The Dynamic Range of 16 Bits”, which aims to demonstrate that 16 bits isn’t limited to 96 dB. I’d love to hear your thoughts on it. The article provides two samples of test tones, a 1 kHz tone at 0 dB (16 bit/48 kHz WAV) and a 1 kHz tone at -105 dB (16 bit/48 kHz WAV), plus a spectral analysis of a -105 dB tone encoded as 16 bit/48 kHz PCM. The conclusion the article draws is that:

    “Thus, 16 bit audio can go considerably deeper than 96dB. With use of shaped dither, which moves quantization noise energy into frequencies where it’s harder to hear, the effective dynamic range of 16 bit audio reaches 120dB in practice [13], more than fifteen times deeper than the 96dB claim.”

    According to the article, the more than sufficient resolution of CDs would render 24 bit pointless.
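
    For what it’s worth, the gist of Monty’s demonstration is easy to reproduce (a rough Python sketch using flat TPDF dither; his example uses shaped dither, which does even better):

    ```python
    import numpy as np

    fs = 48000                       # one second at 48 kHz -> 1 Hz FFT bins
    t = np.arange(fs) / fs
    tone = 10 ** (-105 / 20) * np.sin(2 * np.pi * 1000 * t)  # 1 kHz, -105 dBFS

    # Quantize to 16 bits with flat TPDF dither.
    lsb = 1 / 32768
    dither = (np.random.rand(fs) - np.random.rand(fs)) * lsb
    quantized = np.round((tone + dither) * 32767) / 32767

    # The tone sits ~15 dB below the 16-bit step size, yet it survives:
    spectrum = np.abs(np.fft.rfft(quantized * np.hanning(fs)))
    print("spectral peak lands at", np.argmax(spectrum), "Hz")  # expect 1000
    ```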

    Cheers

    • Monty is correct about the potential of 16-bits to do better than 93 dB but the fact is no one is doing noise shaping on 16-bit PCM so it is a moot point.

      • Steven Sullivan

        “Monty is correct about the potential of 16-bits to do better than 93 dB but the fact is no one is doing noise shaping on 16-bit PCM so it is a moot point.”

        Says who? Are you claiming that CD audio is never downconverted from ‘high rez’ to dithered, noise-shaped 16-bit?

        • There seems to be some confusion about this point. I wasn’t referring to high-resolution sources being converted to CD standard with dither and noise shaping. I believe what Monty was talking about is that 16-bit PCM recording can reach upwards of 120 dB of SNR using noise shaping inside of a 16-bit word, which is true. My point was that there aren’t any engineers or systems that I know of that are actually doing it.

          • Steven Sullivan

            So, Monty is right about the high DR capability of downconverted high rez, and also right about the high DR capability of noise-shaped ‘native’ 16-bit, but can be discounted on at least the latter point because noise-shaped ‘native’ 16-bit recordings are rare? That’s your argument?

            Can I discount hi rez then because recordings that actually exploit the >96dB DR capability are rare there too?

            If it takes the most exotic and unusual recordings to actually show off the touted benefits of high rez over Redbook (those benefits that have remained stubbornly *non obvious* and *subtle* even in the best controlled tests), what is the fuss about, really?

  • RONALDO FRANCHINI

    I have some doubts that I’d ask you to clear up. I own an Oppo BDP-105D Darbee Edition and a Benchmark DAC2HCG. What is the difference in sound resolution and perceived quality when listening to audio transferred to a pen drive from a high-resolution source like a 24-96 or 24-192 audio file and output from the Oppo balanced outputs, compared to the same audio from the Benchmark DAC connected to the Oppo digital outputs? Since I suppose the DACs in both units are the same, the Sabre 32 Reference DAC ES9018, I assume the quality is the same, isn’t it? I am fairly sure that the Oppo does not down-convert the digital input from an HR digital file to 16-44 at the digital outputs; or is that only true for digital files and not for SACDs, DVD-Audio and Blu-ray discs? Another doubt: how can I extract the DSD digital audio stream from an SACD using the Oppo? Can I perform that only using the Oppo HDMI output and the Kanex Pro HDMI? These matters are very confusing for the ordinary audio enthusiast, since the manufacturers do not disclose all the equipment features.

    • Ronaldo…I only have a few moments today. There’s more to it than just the chip in the Benchmark or the Oppo 105. The Benchmark is a professional, state-of-the-art piece of gear…the Oppo is a first-class consumer piece.

  • bill dorsey

    Interesting? I understand that your test is slightly different, but her point has to be incorporated into the discussion, in my opinion…

    Why ABX Testing Usually Produces Null Results with Human Subjects
    by Teresa Goodwin

    ABX testing has been extremely controversial since it was introduced decades ago. It is my hope to put this testing protocol in its final resting place by combining its sad history with the difference between how humans perceive sound and how ABX testing is actually applied.

    The “X” in ABX is either A or B, randomly selected; the listener needs to identify whether that “X” is “A” or “B”. Unfortunately, human beings do not have the ability to compare three sonic events sequentially. One must keep a sonic memory of sample “A” they just listened to so they can compare it to sample “B”, and then listen to “X” and try to decide if it sounds more like “A” or “B”. It is the introduction of this third sound that makes the task impossible for human beings: we can compare two different sounds as long as we don’t wait too long, but our sonic memory cannot juggle three, no matter how many times one is allowed to go back and forth. Thus ABX tests usually get null results, and cause listening fatigue.

    The better way to do this is to play “A” in a relaxed setting for an entire piece of music, at least five minutes, then play the same piece of music with “B”, and then ask not whether they sound different, but which one did you like? This is how most people shop for stereo equipment. Thus, it is not the methodology of ABX tests I object to, but their very existence.

    Since the introduction of ABX double-blind testing protocols many decades ago, I have known they were complete and utter frauds, and that is one reason I started my print newsletter in the 1980s and later my blog, “The Audio Iconoclast”: http://audioiconoclast.blogspot.com/

    From its purpose statement “The Audio Iconoclast will challenge many deeply held beliefs in both the audio and musical communities. In music and its reproduction explaining what one hears when it is not directly measurable is not easy, the common practice is to dismiss it. This is wrong! In our world of music enjoyment there are subjectivists “music listeners” and objectivists “audio scientists” who try to measure phenomenon. Music listeners believe what they hear with their ears. Audio scientists do not believe what they hear unless they can quantify and measure it. If they cannot measure it, it does not exist and they convince themselves they are not hearing what they hear! My quest is to show the wisdom of enjoying the sound of music and accepting what one hears, even if it cannot be scientifically proven.”

    For example, SACD would have replaced CD by now if not for ABX tests and pseudo-scientific studies in AES papers. Anyone possessing a pair of ears on the sides of their head can clearly hear the huge difference between low and high resolution for themselves; however, too many of them have been brainwashed not to believe their own ears and instead rely on these pseudo-scientists, who over the years with ABX double-blind testing have proven:

    1) All amplifiers sound the same.

    2) All CD players sound the same.

    3) A coat hanger sounds the same as an expensive interconnect.

    4) MP3 sounds the same as CD.

    5) CD sounds the same as SACD.

    6) High resolution PCM sounds the same as DSD.

    Remember the infamous 1987 blind listening test conducted by Stereo Review that concluded that a pair of Mark Levinson monoblocks, an output-transformerless tubed amplifier, and a $220 Pioneer receiver were all sonically identical?

    Or perhaps you remember the ABX test that was likely the most damaging to these pseudo-scientists, comparing a known audibly defective amplifier to a perfectly working one? All listeners were able not only to hear the defect in the amplifier but also to describe its distorted sound under normal listening conditions. However, using ABX testing protocols, none were able to identify the difference between the defective and the working amplifier with any statistical significance, thus proving beyond a shadow of a doubt that ABX testing does not work. In addition, I am sure none of the participants would be willing to take home the defective amplifier; I am quite sure they would all want the perfectly working one!

    ABX double-blind testing should be banned by all intelligent people as the absolute scam it is. If one cannot prove the differences people experience every day of their lives when listening to music they love, then any such tests are total and complete failures. But more dangerous than that, they are hurting the sales of superior audio equipment, superior recordings, and the musical satisfaction of gullible music lovers who believe these tests instead of their own ears because of the scientific garb they are dressed in. It is time for the real motives of the anti-high-resolution crowd to be revealed and their rhetoric buried forever, so the masses can actually listen to high resolution with unbiased ears!

    In summary, ABX double-blind testing does not prove that everything sounds the same, as real sonic differences are easily heard in casual listening. No, what ABX double-blind testing proves is that human subjects do not have the ability to compare three sonic events sequentially with any statistical significance, revealing a deficiency in the short-term sonic memory of our species.

    I believe even the most golden-eared audiophiles would not be able to identify differences with any statistical accuracy between an MP3 music file and a professional master recording using ABX double-blind testing protocols. This does not mean we should all only listen to MP3s on the cheapest stereos we can find; quite the contrary, it means that we should enjoy the highest resolution music possible on audio equipment that we have determined to sound the best using our ears in standard casual listening evaluations.

    Further reading:

    ABX test

    http://en.wikipedia.org/wiki/ABX_test

    Blind Listening Tests are Flawed: An Editorial by Robert Harley

    http://www.avguide.com/forums/blind-listening-tests-are-flawed-editorial

  • Steven Sullivan

    “But just as the Boston Audio Society study failed because none of the content evaluated during their study was real high-resolution audio”

    That’s not true.

    • Steven…can you identify any of the source materials that the BAS used during their study that was a bona fide high-resolution track? I’ve seen the list that David posted, and none of those albums contained any dynamic range or frequencies above that of a standard CD. If you claim it’s not true, then please substantiate your statement.

      • Steven Sullivan

        Please substantiate *yours*, sir.

        From the BAS website

        “While this list is not complete, most of the tests were done using these discs.

        Patricia Barber – Nightclub (Mobile Fidelity UDSACD 2004)
        Chesky: Various — An Introduction to SACD (SACD204)
        Chesky: Various — Super Audio Collection & Professional Test Disc (CHDVD 171)
        Stephen Hartke: Tituli/Cathedral in the Thrashing Rain; Hilliard Ensemble/Crockett (ECM New Series 1861, cat. no. 476 1155, SACD)
        Bach Concertos: Perahia et al; Sony SACD
        Mozart Piano Concertos: Perahia, Sony SACD
        Kimber Kable: Purity, an Inspirational Collection SACD T Minus 5 Vocal Band, no cat. #
        Tony Overwater: Op SACD (Turtle Records TRSA 0008)
        McCoy Tyner Illuminati SACD (Telarc 63599)
        Pink Floyd, Dark Side of the Moon SACD (Capitol/EMI 82136)
        Steely Dan, Gaucho, Geffen SACD
        Alan Parsons, I, Robot DVD-A (Chesky CHDD 2003)
        BSO, Saint-Saens, Organ Symphony SACD (RCA 82876-61387-2 RE1)
        Carlos Heredia, Gypsy Flamenco SACD (Chesky SACD266)
        Shakespeare in Song, Phoenix Bach Choir, Bruffy, SACD (Chandos CHSA 5031)
        Livingston Taylor, Ink SACD (Chesky SACD253)
        The Persuasions, The Persuasions Sing the Beatles, SACD (Chesky SACD244)
        Steely Dan, Two Against Nature, DVD-A (24,96) Giant Records 9 24719-9
        McCoy Tyner with Stanley Clark and Al Foster, Telarc SACD 3488”

        So, you are saying that *none* of those is truly ‘high rez’. The way you do that is twofold: to set an almost impossibly rare standard of having ‘dynamic range above that of CD’ and a more attainable standard of having frequencies above 22kHz. Regarding that first goalpost, how many commercial recordings *in existence* actually span more than 96dB (much less the 118dB that CD can actually offer, with dither and noise-shaping) ? Regarding the second, did you actually test all of those recordings for spectral content above 22kHz? I have seen such content in a variety of DVDAs and SACDs.

        Let’s examine that first goalpost again. Commercial recordings (including ‘classical’) with DR exceeding 96 dB are quite rare, if extant at all. Yet that has never, ever stopped ‘high end’ fans from swooning over SACD and DVDA releases (many sourced from analog tape, no less) *and attributing their sound to the high-rez container format*. You’re saying they’re all wrong?

        • Steven,

          I have already substantiated my thoughts on this matter through numerous posts. I have clearly stated the criteria that I believe are necessary to qualify as high-resolution audio. The BAS or you or anyone else can obviously establish your own and measure things differently. Every item in the list above (which I’ve referenced previously) fails to be high-resolution based on the provenance or production path used to create it. Every single SACD can be ruled out because they were all done using DSD 64, which, as stated by the folks at Sony, is very good in the “audio band” only. This means that they have the same frequency response as a good CD. They may sound different, but they are no better (in stereo) than a traditional CD as far as fidelity goes. The few items on the list that are DVD-Audio releases ALL came from analog tape. This is the limiting factor with regard to the PCM tracks used.

          You may not agree with my definition…and you are certainly entitled to your own opinion…but the works cited above guarantee that a realtime downconversion to CD quality would be no different than the “high-resolution” sources. And that’s why the BAS research is meaningless.

          CD can do a really terrific job of capturing audio fidelity. In fact, I agree that the dynamic range is sufficient to meet the needs of virtually every recording available. There are some (I possess a number of them) that exceed those specs. I believe that the added octave and dynamic range are important and can make an audible difference in the sound of a recording. I’m hoping to do a proper study and establish that fact. We’ll see.

          • Steven Sullivan

            ‘Two Against Nature’ DVDA was actually an all-digital recording*, and I know for a fact (because I’m looking at the spectrum right now) that it has robust spectral content up to 24 kHz, with the rare spike above that, and low-level noise up to 48 kHz (meaning it was likely a 48 kHz SR recording in a 96 kHz package). Not hi rez enough to be *true* hi rez though, right? Yours is a ‘No True Scotsman’ argument, really.

            (On ‘Everything Must Go’, SD went back to analog tape.)

          • Steven…you’ve identified one out of 19 albums that benefited from a sample rate of 48 kHz. If you feel that the Meyer and Moran paper resolved the issue of the perceptibility of high-resolution materials, then the debate is over for you. Even given that one DVDA disc of their sample exhibited marginal improvement on a spectrogram, I’m not yet convinced. I would like to ensure that there is a clear difference between ALL of the CD-spec versions and the source high-resolution files or discs. This issue remains unresolved in my mind…and in plenty of others’.

  • Karl Vergarin

    The brain makes so many adjustments to incoming sound that this kind of test for subtle differences is impossible to make functionally objective. No matter how good your testing protocol, the sound is massively interpreted by a profoundly complex and constantly changing brain that is extremely good at creating meaning from small amounts of “data” but has no ability to compare any kind of data until it has coated it with meaning. At the level of cognition happening during these tests I believe that it is safe to say that the brain never does the same thing twice because it simply cannot.

    Any ABX test is, I think, subject to the inability of the brain to store data that is in any way separate from meaning. We can remember meanings – if they are sufficiently important – very well in a rough and practical sort of way but the original data that went into the production of that meaning is lost. I think that for a person to listen to 30 seconds of music and then immediately listen to it again is not very different, from a cognitive memory point of view, from looking at a full moon over Manhattan then going to San Francisco and looking at the moon there 28 days later. If you asked the moon-viewer to describe the differences in color, brightness and size you’d get answers that were as reliable as the average ABX audio test. (Perhaps there are some people whose brains do not function well at assigning meaning to data and who therefore have a somewhat better ability to perceive and remember data “objectively”)

    • Very good points…I’m not sure how measurable this whole thing will be. I’ve talked with Kalman Rubinson about checking for brain changes from one flavor to another and have had other discussions about brain wave patterns. We’ll see what happens…if and when I can get the funding to do this correctly.

  • I don’t know if you’re monitoring the main debate thread over at avsforum regarding the test you and Scott set up, but if not, I suggest you take a look. Unfortunately, I think you have two bad choices: 1) let things continue as is, or 2) encourage Scott to moderate things to try to get back to the “fun and thoughtful” original intent.

    The first choice entails staying the course with the increasingly heavy-handed tactics of the debate thread starter, which have degenerated into an increasingly strident set of rather arbitrary rules dictated by that thread starter, including but not limited to only allowing others to contribute data or tech comments (as solely determined by the debate thread starter). Any concerns or comments about logic, conclusions and the validity of scientific conclusions seem off-limits and reserved for himself. All of that perhaps wouldn’t matter if everything were still informal and fun, but the thread starter has already committed hard to the belief that the results are scientifically valid, that the differences result from the sample rate and bit depth difference alone (and not from uninvestigated conversion artifacts, playback system quirks, sample differences, etc.), and that they are worthy of a paper. All this without even revealing the timestamps in the files where audibility was detectable, so that his claim that the cause is solely bit depth and sample rate can go unchallenged. As of a few hours ago, he’s even started putting people on a 5-post probation to meet his standards.

    Among the traps for you in the first choice is that the debate thread starter has set you up if the little house of cards he’s created does collapse: he asserts strongly that he can depend on the sole differences between the files being sample rate and bit depth, and he states that because he asserts that you and Scott represented those were the sole differences. So if and when a conversion artifact or other unexpected difference is eventually discovered to explain audible differences, guess who will be thrown under the bus by the thread starter: Scott, and especially you. And you won’t deserve it, because you never represented this test as something rigorous enough for the kind of claims the thread originator is so desperately striving for. You may not put much stock in the Meyer & Moran study, but your comments on it to date have been measured and reasonable. The thread starter, though, has already gone all-in on the claim that this little test refutes that study. Given the relative rigor of M&M and this informal test, if he repeats that far and wide, you may be tainted by the ridicule it will deservedly draw from people who actually know the peer-review process and how to create studies that conform to the scientific method.

    The second choice is no fun, either. The debate thread starter doesn’t like to be corrected, uses facile terms to put on an apparent air of openness and tolerance before shortly revealing otherwise, and seems to feel pretty confident he has true believers in his camp to push ahead with his own agenda. He’s clever and certainly vicious, and has even created a parallel thread on his own website to stir up his stalwarts and prepare for the contingency that his antics at avsforum wear thin.

    There could still be fun, interesting and thoughtful things to learn from your test files – it seems some kind of audible difference has been reliably detected by a few. But looking into those differences in a fun, efficient, respectful way just won’t happen with the status quo. Your effort has been co-opted by someone with an obvious agenda, and I hope you can reclaim it.

    • Let me finally add that the good news in all of this is that there is tremendous passion for what you are doing (a necessary but not sufficient-by-itself condition for profitable work), and that’s so much better than if no one cared (business buzzkill)!

    • Thanks for the update…no, I haven’t been over to AVS Forum to monitor any of the comments. There are certain alpha dogs over there that are a turn-off to me. I will contact Scott and see what our next step should be.

