Tech Spotlight: SRS & The Future Of Surround

So, we wanted to define a syntax describing how the audio objects exist in three-dimensional space and how they’re behaving. Some of them may be stationary and some may be moving, but, essentially, rather than mixing to specific speaker locations, you’re mixing into a three-dimensional space without regard for how many speakers might be used later. Once you’ve retained that information, you can use that to render to any arbitrary speaker configuration. You would be able to take information from the creative community that says, “We want this object here, that object there,” or, “We want reverb to come from here”—and adaptively map that to whatever playback resources are available based on what the renderer knows about the playback environment. So if you have two speakers in a TV, fine, we’ll do some virtual processing or some psychoacoustics using that information. I may decide, as a home theater enthusiast, that I want 33.3 channels. Well, because the mix is in space rather than a particular channel configuration, I can do that, and basically I’m able to achieve more resolution. Someone heard us talking about this and said, you know, the way you’re describing this, it’s as though speakers are like pixels. The more speakers you have, the more resolution you have, and the better ability you have to place things in space.
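The core idea above — an object placed in mix space, rendered later to whatever speakers exist — can be illustrated with a toy sketch. This is not SRS's actual MDA renderer (which is unpublished), just a simple distance-based panner showing how one stored position maps to any layout:

```python
import math

def render_gains(obj_pos, speakers, rolloff=2.0):
    # Toy distance-based panner: nearer speakers get more of the object's
    # signal; gains are normalized so the object's overall level is preserved.
    weights = [1.0 / (1.0 + math.dist(obj_pos, s)) ** rolloff for s in speakers]
    total = sum(weights)
    return [w / total for w in weights]

# The same mix-space position renders to any layout -- a two-speaker TV
# or a larger array -- without remixing.
stereo = [(-1.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
gains = render_gains((0.5, 1.0, 0.0), stereo)  # object right of center
```

More speakers simply give the same object position more "pixels" to land on, which is the resolution analogy Kraemer draws.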

RS: So then, in essence, if you’re the guy mixing a movie, you end up with a three-dimensional sort of grid in front of you? And rather than thinking in terms of the actual speakers, you’re just programming, or placing things to move through this space?

AK: Yes. We’ve actually done a mix like that as a proof of concept. And it’s very interesting, because when you’re a mixer, you have to quickly forget about the idea that you’re mixing to a particular speaker or target channel and get used to the fact that you’re actually placing and creating a three-dimensional audio experience, regardless of how many speakers there might be later or where they are. It’s almost like you’re putting a scrim over the speakers.

RS: OK, so what does that then mean for the listener if they walked into an MDA-equipped theater? What would they hear?

AK: Having this kind of information just opens up the door for development of all kinds of techniques for rendering a soundfield that would not even be considered if you’re just limited to [amplitude-only driven] speakers around the edges of the room. For example, when you were here, you heard a demonstration of our CircleCinema 3D depth-rendering technology. It creates this feeling of depth—objects moving toward you and away from you, receding from you. You can’t do that with regular surround—you have speakers behind you and in front of you, but there are no speakers close up to your face. Now, with that demo, you only heard an approximation of where we think things should be. But with the full evolution to MDA, the creator can tell us, say, we want objects panned not just around a two-dimensional plane, around the edges like you can do with traditional surround, but also around interior points, immersive points. Even with 7.1 today, the sound is still in a two-dimensional plane, and it still clings to the speakers and to the walls—you don’t get the feeling that you’re immersed in the soundfield. So, by applying different approaches, different technologies, there are a lot of things that can be done to create a more immersive and involving sound experience.

Two-Speaker Surround
RS: I listened to the 2.1-channel demo of CircleCinema 3D and found it engaging in ways I’ve never heard from any traditional 5.1-channel system. How can a two-channel system possibly be better?

AK: Well, the basic concept is that the hearing system is a very sophisticated thing, and not limited to simple amplitude panning—not limited to location based on whether the sound is louder in one place and quieter in another. There are a lot of mechanisms at work. So why not use what we know about the hearing system to create this perception of three-dimensional space and immersion without a lot of boxes hanging around the room? That defines the whole theory of psychoacoustics: Let’s use the hearing system directly, rather than indirectly with speakers all over the place.

Now, it turns out that one of the techniques for projecting sound into space based on the auditory system is something called HRTF, or head-related transfer functions, where the frequency or spectral characteristics of a broadband audio signal, like speech or music, will vary depending on the angle relative to the ear canal. And that’s because of the structure of the head and the outer ear, and the shoulders—everything. And by understanding how that changes, we can take advantage of HRTF to create sounds in three-dimensional space, from a perception standpoint, that aren’t actually coming from speakers.
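In practice, HRTF-based rendering usually means convolving a mono source with a measured left/right pair of head-related impulse responses (HRIRs, the time-domain form of HRTFs). The sketch below uses made-up stand-in filters rather than real measured data, purely to show the mechanics:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    # Convolve one mono source with a left/right pair of head-related
    # impulse responses to place it at the angle the HRIRs encode.
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # 2 x N stereo output

# Made-up stand-in filters for a source off to the listener's right:
# the right ear hears it earlier and louder; the left ear's version is
# delayed and attenuated by the head shadow. Real systems use measured
# HRIR sets, not these toy values.
hrir_r = np.array([1.0, 0.3, 0.0])
hrir_l = np.array([0.0, 0.6, 0.2])
src = np.random.default_rng(0).standard_normal(1024)
out = binaural_render(src, hrir_l, hrir_r)
```

Played over headphones (or over speakers with crosstalk cancellation), interaural differences like these are what let two channels suggest a source position off the speaker axis.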

The next thing that you heard with CC3D was another psychoacoustic phenomenon that we kind of discovered last year about what sounds do when they come closer versus moving farther away. And we found that we were able to simulate something that normally can’t be done with traditional surround sound, which is proximity. Obviously, it happens to us every day in real life—if I walk closer to you, you can tell that my voice is coming from closer to you; if I walk away, you can tell I’m walking away. And again, that’s not just amplitude. So we’re taking advantage of what we learned there to create this feeling that things are being projected into space in the D axis, the depth axis.
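SRS has not published its proximity cue, but the point that distance is more than amplitude can be illustrated with a generic sketch: combine a level rolloff with a low-pass filter that dulls high frequencies as the source recedes (a crude stand-in for air absorption). The cutoff mapping here is invented for illustration:

```python
import numpy as np

def apply_distance(mono, distance, fs=48000):
    # Rough distance cue: 1/d level rolloff plus a one-pole low-pass whose
    # cutoff falls with distance (a crude stand-in for air absorption).
    gain = 1.0 / max(distance, 1.0)
    cutoff = max(2000.0, 16000.0 / max(distance, 1.0))  # made-up mapping
    alpha = np.exp(-2.0 * np.pi * cutoff / fs)
    out = np.empty_like(mono)
    y = 0.0
    for i, x in enumerate(mono):
        y = (1.0 - alpha) * x + alpha * y  # one-pole low-pass step
        out[i] = gain * y
    return out

x = np.random.default_rng(1).standard_normal(4096)
near = apply_distance(x, 1.0)   # close source: louder, brighter
far = apply_distance(x, 8.0)    # receding source: quieter, duller
```

Sweeping the distance parameter over time is one simple way to suggest motion along the depth axis Kraemer describes.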

So, when you have this two-speaker concept, this minimal-number-of-speakers concept, you can create a much more immersive soundfield because it’s matching playback to the human ear-brain system, to the perception system, in a much better way than you can when you’re just simulating surround by putting speakers around the room.

RS: Are you saying that in the ideal surround sound world, the only time you’d have more than a pair of speakers up front is when you were maybe just needing to fill the back part of a large auditorium or listening room?

AK: Almost. The other reason is to sometimes put something directly behind you, rather than having only this kind of immersive field of depth projection [from the front]. Most of the stuff that occurs when you’re watching a movie or TV is occurring in front of you, though occasionally there are things that kind of fly in from behind you.

So what we’re working on now is essentially a multichannel or 5.1 version of the two-channel system that you heard. You can have speakers back there, as long as you treat all the speakers with the same kind of technology to maintain this immersion.

But the main thing is, the potential for exploring these kinds of techniques is really unlimited. I mean, look, I think what you heard is pretty good—it impresses me and a lot of people. But there’s really no limit—we can go well beyond that, even. And maybe we will, or other people will.


Colin Robertson

Awesome article, guys. I genuinely hope this is the future of how surround sound is created and decoded. Let US decide how many speakers we want or need dominating our rooms!

This could solve the problem of all the different surround speaker placement recommendations we have now. Use as many or as few speakers as you would like, or for audiophiles, as you need according to your environment. This may piss off people who like to adhere to a universal standard and home theater buffs who've already invested so much in accommodating those standards. Yet, this system should work in those situations, right?

Let's face it, you can have wonderful sounding "surround" from two speakers now, but it has to work in a real world scenario where people don't have a sweet spot and a perfectly symmetrical room to work with. I would only hope this type of system will eventually accommodate these differences.

It's not discussed in the article, but I imagine the natural extension of this is to program your room into the system so it can account for speaker placement and acoustical differences that occur in people's homes. I would imagine building a simple 3D model of my room so it can work around different room shapes and any potentially acoustically affecting surfaces such as a coffee table or prominent column. You could even program which surfaces are reflective, absorptive, diffuse and so on...

Or even better, a sensor like the Microsoft Kinect could scan your room visually to automatically build a 3D model of your room and then use a microphone to auto-tune for acoustics much in the way a receiver does now with an Audyssey mic. It could even sense how many people will be viewing and where they are sitting before the movie starts to automatically widen or narrow the sweet spot!

Rob Sabin
Some astute comments, Colin! You're right that it didn't get mentioned in the article, but in my interview with Alan Kraemer we did discuss how an MDA-based system in a receiver, for example, could make use of the information gathered by an Audyssey-like set-up and room correction algorithm to more fully optimize the surround experience. Keep in mind how much information the most developed of these systems can acquire these days. That includes not only determining how many speakers you have and where they are, but all kinds of information about the room boundaries and the behavior of sound coming from the speakers at multiple locations. Even today, Audyssey's top-of-the-line MultEQ Pro system can take measurements from up to 32 listening positions and distinguish each from the next. There's a lot of potential information to feed an MDA-based system to help fine-tune depth rendering and spatial enhancement.
kdttaz

I also thought this was fascinating and would seem to hold great potential. It also reminded me of one of the technologies from Daniel Suarez's "The Daemon" - HyperSonic Sound - in terms of 3D placement, but without the directionality.

Daffy

Heya Colin,

We have a tech that attempts to remove the need for a sweet spot and isn't room dependent, aimed at stereo, surround, and virtual surround field technology. Give it a read if you like.