I read it again this morning on the Audiostream site. In the interviews that Neil Young did with both Michael Lavorgna and Chris Connaker (of Computer Audiophile), he talks about using the various sample rates in the studio as if they were production tools. Here’s the wording in the Audiostream piece:
“NY: I personally look forward to sampled records and hip-hop that suddenly realize hey, we have a new thing to go with. Now we can be like lo-res for the beat and then we get to the hook we’ll go to high res. There’s all kinds of things you can use this for. You can have a 192 recording and play back that’s based on 16/44 for several of the instruments and then when you get to the hook it goes to 192. That can happen and be an effect, it’s part of the musical palette. It’s part of the musical thing.”
And here’s essentially the same comment from CA:
“NY: All we’re doing is saying, in the studio today make your digital music in whatever resolution you want to make it at. We’re going to say what it is on our player, you’ll know. It will be there somewhere. People will learn when they listen to things. When it sounds great they’ll get curious. They’ll want to know what it is. Some of them may, some of them may not. They’ll choose to take a look. And go, look at that, I love this, and it’s 192. It’s one of three things I have that are 192. All the others are lower res, some are 48, some are 96. They may, in their mind, go “oh shit” this is what it sounds like at 48, really great. I wonder what it would have sounded like at 192. The awareness of those differences and the palette musicians have to play with will change. Producers will now be able to use resolution as an effect. It can be super clear if you want that. Or, it can be dull if you don’t want that. Even within one recording you can go from low res to high res. You can use it as a tool. You can use it creatively. You can turn it on and off. The whole recording will have to be presented at its highest resolution. But if the chorus and the hook are at 192, and the rest of the song is at 44.1 or 48, something compatible, then it’s mixed at 192. The source was low res, the chorus was super high res, some of the vocals are really high res, some are dull. It’s a new way to play. A whole new thing. That kind of creativity in the studio is possibly a new tool for the hip hop and rap community.”
As an audio engineer, this kind of thinking is clearly outside of the box…and actually outside of any reality I can imagine. First, no one is going to use sample rates as some sort of “equalizer” to adjust the clarity of a section or an instrument. We have real equalizers and microphone technique to accomplish that. There are production tools that let engineers adjust virtually every aspect of the sound coming from the musicians…and I would venture that changing the sample rate to achieve a desired sonic effect is not going to be one of them.
Just how would this work? When a session is first started, the original setup of the Pro Tools session establishes the sample rate and word length (all PT sessions are done using PCM). The addition of loops, beats, live drums, basic rhythm tracks and everything else happens within a single session, and a single session can’t contain different sample rates or word lengths. It would be possible to take samples from lower-fidelity sources and record them into the high-resolution session using sample rate conversion or, more likely, analog transfers.
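To make the point concrete, here is a minimal sketch (hypothetical names, naive linear interpolation rather than the polyphase filtering a real converter would use) of what happens when 16/44.1 material is pulled into a 192 kHz session: the converter only interpolates between the samples that already exist, so the “low-res” source gains no new information from living at the higher rate.

```python
import math

def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation sample rate converter.

    Illustration only -- real converters use band-limited polyphase
    filters. The point stands either way: upsampling adds no new
    audio information to the source material.
    """
    if src_rate == dst_rate:
        return list(samples)
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * src_rate / dst_rate          # fractional position in source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# A 10 ms, 1 kHz tone recorded at 44.1 kHz, brought into a 192 kHz session:
src = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(441)]
dst = resample_linear(src, 44100, 192000)
print(len(src), "->", len(dst))   # 441 -> 1920
```

More samples per second after the conversion, but the same band-limited content — which is why switching rates mid-song would not work as the “effect” Neil describes.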
I think Neil is off base with this one…and it makes me wonder whether he really knows much about sample rates and the actual “audible” differences between them. Could this be a case of “too old to rock ‘n’ roll”? Sample rates are not going to be used as “production tools”!
It would have been nice if one of the interviewers had asked him more about this. Didn’t happen.