The title of today’s article doesn’t reflect my own experience or position, but the author of a blog post bearing this title alerted me to it recently. Intrigued, I made my way over to his new blog to check it out. I was perfectly willing to take his test and see if I could tell the difference between HD-Audio and AAC 256k VBR…one of the comparisons he offers. I’ll provide a link to his page at the end of the article so you can try it for yourself. But read this entire post before you do…the test is another example of a fatally flawed experiment (just like the Boston Audio Society research project). I’ve been in touch with the author, and we’re going to sort out how to improve his approach.
According to his website, he’s a science teacher in Barcelona, Spain, and is very passionate about music. However, I’m not sure how much audio engineering experience he has. He displays spectrograms on the site and talks about polarity and so on, but he didn’t recognize that his selected source file was an original DSD recording that had been converted to a 192 kHz/24-bit PCM file…ultrasonic noise and all. He believes, “The signal we start with is an HD audio file of the highest possible quality, sampled at 192 kHz and 24 bit.” I was immediately suspicious of this claim.
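For readers who want to check a "high-resolution" file themselves: a telltale sign of a DSD-to-PCM transfer is the rising shelf of ultrasonic noise that DSD's noise shaping pushes above the audible band. Here's a minimal sketch of that check in Python with NumPy/SciPy; the function name and the 30 kHz split point are my own choices, not anything from the author's site.

```python
# Sketch: estimate how much of a file's energy sits in the ultrasonic
# region. DSD-derived 192 kHz PCM typically shows far more energy above
# ~30 kHz (noise-shaping residue) than a native PCM recording does.
import numpy as np
from scipy.signal import welch


def ultrasonic_ratio(samples, rate, split_hz=30_000):
    """Ratio (in dB) of power above split_hz to power below it."""
    samples = np.asarray(samples, dtype=np.float64)
    if samples.ndim > 1:                     # mix multichannel to mono
        samples = samples.mean(axis=1)
    freqs, psd = welch(samples, fs=rate, nperseg=8192)
    hi = psd[freqs >= split_hz].sum()
    lo = psd[freqs < split_hz].sum()
    return 10 * np.log10(hi / lo)
```

A native 192 kHz recording of acoustic instruments usually returns a strongly negative ratio here, while a DSD transfer's shaped noise pushes the number way up. Load the samples however you like (e.g., `scipy.io.wavfile.read`) and pass the array and sample rate in.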
So what’s the problem? I clicked through to the test page and chose to download the files rather than stream them, so that I could confirm the provenance of the samples. Gabriel, the author, states, “In the following section you will be asked to compare Master quality audio to lossy iTunes Plus. For the test, I have used master audio quality tracks you can legally download for free from the Internet. Then, I have compressed sections of the originals to AAC 256 kbps using iTunes encoder (iTunes Plus option). Finally, I have edited sample tracks that consist of two sets of about ten short sections of the original and of the compressed tracks, placed side by side so that they can be easily compared.”
Once again, the provenance of the original recordings is suspect. We don’t actually know whether the source files are, in fact, high-resolution audio. Just because someone says they are and the sample rate is 192 or 96 kHz doesn’t mean they are high resolution. This is the same mistake the BAS researchers fell into.
I played all of the files, and I sure couldn’t tell the difference between the A and B sections (which were painfully short…the best way to do this comparison is to be able to switch instantly between the A and B versions). As I looked at the spectra of a few of the files, it became painfully clear why no one would be able to detect a difference…they were virtually identical! Here’s a plot from one of the classical selections:
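If you'd like to reproduce this kind of comparison yourself, a few lines of Python with NumPy/SciPy will do it. This is a sketch with hypothetical helper names, not the tool I used for the plot: it computes the averaged power spectrum of each segment and reports the largest per-band level difference in dB.

```python
# Sketch: compare the averaged spectra of two audio segments. If the
# largest band-by-band difference is near zero, the segments are
# spectrally indistinguishable, as in the Vivaldi example.
import numpy as np
from scipy.signal import welch


def spectrum_db(samples, rate, nperseg=8192):
    """Averaged power spectrum in dB via Welch's method."""
    freqs, psd = welch(np.asarray(samples, dtype=np.float64),
                       fs=rate, nperseg=nperseg)
    return freqs, 10 * np.log10(psd + 1e-20)   # small floor avoids log(0)


def max_spectral_gap(a, b, rate):
    """Largest per-band level difference between two segments, in dB."""
    _, da = spectrum_db(a, rate)
    _, db = spectrum_db(b, rate)
    return np.max(np.abs(da - db))
```

Run it on matched A and B excerpts (same start point, same length) and a gap of a fraction of a dB across the band tells you the two versions are, for spectral purposes, the same signal.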
Figure 1 – The spectra of the Vivaldi “Spring” segments A and B. Notice they are virtually identical.
This is a prime example of why it is so important to get meaningful information about the perceptibility of high-resolution audio. This type of casual experiment causes more confusion than it provides information. The people who took the test and couldn’t tell the difference don’t know that what they heard wasn’t accurately testing their hearing…or the perceptibility of HD audio over compressed audio.
That means there are now audio enthusiasts who will go around claiming that high-resolution audio is “irrelevant,” when we really haven’t put it to a rigorous test. The jury is still very much out.
Here’s the link if you want to visit the site for yourself.