Dr. AIX's POSTS

AES Report: Friday Part I

Last Tuesday, the local chapter of the Audio Engineering Society held their monthly meeting at the university where I teach. I can never make the monthly Tuesday evening meetings because they conflict with my evening classes. So they brought the meeting to me. I was the featured speaker and talked about high-resolution audio. The title of the presentation was “High-Res Audio/Music: More Fidelity or Marketing Hype?” If you’re a regular reader of this site you already know the nature of the discussion. I think I opened some ears and minds to the gross misrepresentations being made by the music industry.

I reconnected with some old friends, a few former students, and was introduced to a number of new people in the industry. As a result of my presentation, I was provided access to the 2016 AES Convention and the conference Audio for Virtual and Augmented Reality, a separate event held concurrently with the traditional AES event. It was organized by my friend Andres Mayo and Linda Gedemer. You can view the conference program by clicking here.

So on Friday morning, I headed east of the San Diego freeway and planned a whole day around the AES convention and the AVAR conference. I had a very full day and didn’t return home until after 11 pm — a rare thing for me. But the day was incredibly engaging on a variety of fronts, which I would like to share over the next couple of posts.

My time was focused on the main presentations — tutorials, as they called them — held in the main auditorium. Just outside the venue, sponsors of the event were demoing hardware and software targeted at the production community — especially those interested in grabbing a portion of the estimated $120 billion market that is expected to develop over the next 3 years! There are already over 4000 production companies working in the VR and AR space. And they are all scrambling to learn about immersive sound production, postproduction, and delivery strategies.

I put on video headsets and headphones and listened carefully to the demos of Dolby Atmos, Ambeo, Oculus, Audiokinetic, VisiSonics, Gaudio, and others. Honestly, I was underwhelmed by both the visuals and the audio, but the audio fidelity was really dreadful. There are experiments being done with Ambisonics, multi-array microphones (with up to 64 microphones placed around a sphere), and plug-ins for Pro Tools that allow engineers to dynamically pan “sound objects” to visuals.
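For readers wondering what the Ambisonics experiments mentioned above actually involve at the signal level, here is a minimal sketch — my own illustration, not any vendor's code — of encoding a mono source into first-order B-format (W/X/Y/Z, FuMa convention), which is the basic building block these systems manipulate:

```python
import math

def encode_fuma_bformat(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order B-format (FuMa convention).

    W is the omnidirectional component (scaled by 1/sqrt(2));
    X, Y, Z carry front-back, left-right, and up-down direction.
    Returns the tuple (W, X, Y, Z).
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)               # omni, -3 dB by convention
    x = sample * math.cos(az) * math.cos(el)  # front-back figure-eight
    y = sample * math.sin(az) * math.cos(el)  # left-right figure-eight
    z = sample * math.sin(el)                 # up-down figure-eight
    return w, x, y, z

# A source dead ahead (azimuth 0, elevation 0) lands entirely in W and X:
# encode_fuma_bformat(1.0, 0.0, 0.0) -> (0.707..., 1.0, 0.0, 0.0)
```

Panning a “sound object” then amounts to re-running this encode with new angles on every frame — the decode back to speakers or binaural headphones is where the demos I heard seemed to fall down.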

From what I heard in the tutorials (which were admittedly compromised because they came through the PA system) and other demos, VR and AR audio have a very long way to go. I’ve heard more compelling trackable immersive audio from the guys working in the studio next door. They have developed an omni binaural microphone system that can be attached to a VR camera setup to capture live sound. I’ve heard some pretty good audio from them.

There are so many different aspects to VR and AR audio. A major area of attention is gaming, and I’m sure that gamers will be the first market to get upgraded sound. However, bringing music into the VR and AR space is going to be a much larger challenge. Is it enough to set up a Soundfield microphone in the midst of a live performance and hit record? I’ve heard audio done with this approach and it was far from satisfying. But is it appropriate to deliver the standard CD stereo mix through a pair of headphones while the video allows 360-degree panning?

To be continued…

Dr. AIX

Mark Waldrep, aka Dr. AIX, has been producing and engineering music for over 40 years. He learned electronics as a teenager from his ham radio father while learning to play the guitar. Mark received the first doctorate in music composition from UCLA in 1986 for a "binaural" electronic music composition. Other advanced degrees include an MS in computer science, an MFA/MA in music, a BM in music and a BA in art. As an engineer and producer, Mark has worked on projects for the Rolling Stones, 311, Tool, KISS, Blink 182, Blues Traveler, Britney Spears, the San Francisco Symphony, The Dover Quartet, Willie Nelson, Paul Williams, The Allman Brothers, Bad Company and many more. Dr. Waldrep has been an innovator when it comes to multimedia and music. He created the first enhanced CDs in the 90s, the first DVD-Videos released in the U.S., the first web-connected DVD, the first DVD-Audio title, the first music Blu-ray disc and the first 3D Music Album. Additionally, he launched the first High Definition Music Download site in 2007 called iTrax.com. A frequent speaker at audio events and author of numerous articles, Dr. Waldrep is currently writing a book on the production and reproduction of high-end music called "High-End Audio: A Practical Guide to Production and Playback". The book should be completed in the fall of 2013.

One thought on “AES Report: Friday Part I”

  • Soundmind

    One of the most elusive and desirable characteristics of the sound of music is immersion. Whether you are an acoustician designing a concert hall or an engineer designing an audio system, this is a very difficult thing to achieve. Acoustic scientists call it “Listener Envelopment Value” or LEV. It has been the goal of many efforts: some in designing spaces, like Boston Symphony Hall, were successful; others, designing sound systems to create it, have all failed.

    In 1974, quite by accident, I invented a way to mathematically model acoustic fields with great accuracy and precision. I call this model Acoustic Energy Field Transfer Theory. This led to a way to construct them. I call that machine an Electronic Environmental Acoustic Simulator. A very simplified, scaled-down, compromised and adapted version of it is contained in my US Patent 4,332,979, now long expired. If you read the patent, know that the initial part of it related to a method of measurement and was therefore considered a separate art, not in the scope of the patent. However, the first few sketches of that aspect got left in, so if you read it, ignore them, because without the narrative they will be incomprehensible and irrelevant.

    Two kinds of people — one exemplified by Leo Beranek, an acoustician, and another exemplified by Floyd Toole, a hi-fi marketing researcher — discovered that early lateral reflections play an important role in the sense of envelopment. This explains why fan-shaped concert halls are unsuccessful. But that is far from all of it; there is much more to it than that. The failure of binaural recordings played through headphones to recreate the sense of envelopment proves this cannot be achieved with headphones, and we’ve known why for well over 50 years.

    The mathematical model dealt with the physics of sound. Here’s a clue. At a musical performance indoors anywhere, put your ear up to a wall: what will you hear coming from the wall? Nothing. And yet in most indoor spaces most of the sound the audience hears is due to reflections — 90% or more of it. The field is so diffuse that reflected sound from any one direction is below the threshold of audibility. It is their aggregate that creates the preponderance. No one should get the idea that this reflected field is isobaric, that is, directionless. Exactly the opposite is true: it is made up of very large numbers of directional components, each one having a specific loudness, direction of arrival, time of arrival, and spectral change relative to the first-arriving sound. The specific details define the unique acoustic space and the relationship between the source at one point and the field heard or measured at another.
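    The claim that individually inaudible reflections can dominate in aggregate follows from how incoherent sound levels add: powers sum, not decibels. A quick sketch (my own illustrative numbers, not from the comment above) makes the point — a hundred reflections, each 20 dB below the direct sound, sum to the same level as the direct sound itself:

```python
import math

def spl_sum_incoherent(levels_db):
    """Combined level of incoherent sound components, given each one's
    level in dB. Powers (10^(L/10)) add; the result is converted back
    to dB."""
    total_power = sum(10 ** (level / 10.0) for level in levels_db)
    return 10.0 * math.log10(total_power)

# 100 reflections, each 20 dB below a 0 dB direct sound:
# spl_sum_incoherent([-20.0] * 100) -> 0.0 dB, equal to the direct sound
```

    Doubling the number of equal components adds 10·log10(2) ≈ 3 dB, which is why a dense reflected field overwhelms the direct sound even when no single reflection is audible on its own.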

    The machine allows many of these variables to be manipulated so that countless acoustic effects can be created, even in the simplified version. But the process, without measurements and standards, requires a great deal of trial and error and becomes a learned skill. No two recordings are alike. Recordings do not contain vital information about the acoustics of a space, because when a sound field is converted to an electrical signal by a microphone, a vector field is converted to a scalar having amplitude versus time but no direction. Loudspeakers convert the scalar signal back to vector fields, and in any current technology you can buy, those fields have virtually no relationship to what would be considered desirable or even plausible live.

    One thing you can bet on is that in a typical listening room, speakers aimed at the listener and intended to recreate this field will inevitably fail. The reason is that as you halve the distance between you and any speaker, its sound level at your ears increases by 6 dB, and so, unlike the experiment of listening to the reflections from walls, the source of this sound is easy to identify — that is, to pinpoint by aural cues alone — and the sense of envelopment is gone. This is why none of these systems will work.
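    The 6 dB figure is just the inverse-square law for a point source, and it is worth seeing the arithmetic (a minimal sketch; the function name is mine):

```python
import math

def level_change_db(d_ref, d_new):
    """Change in sound level for a point source when the listener moves
    from distance d_ref to distance d_new (inverse-square law):
    delta = 20 * log10(d_ref / d_new)."""
    return 20.0 * math.log10(d_ref / d_new)

# Halving the distance (2 m -> 1 m):
# level_change_db(2.0, 1.0) -> +6.02 dB
```

    A diffuse reverberant field does not obey this law — its level is roughly constant throughout the room — which is exactly the localization cue the comment describes: a source that gets 6 dB louder as you lean in is trivially easy to pinpoint.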

    BTW, it became clear to me a long time ago that EEAS has no commercial possibilities. There was no interest in it by anyone who had the resources to risk developing the technology into viable products. The requirements of the listening room are also restrictive. Symmetry is one criterion, which eliminates L-shaped living room/dining rooms. Large openings into other rooms, such as archways, are also out. The only working prototype, as far as I know, will remain a one-of-a-kind curiosity.

