DTS Studies Brainwaves to Prove Sound Really Matters

It’s been said that the sound associated with watching video is “half” of the experience. But is it really? Or is it actually more than half? Or less?

Answering this question was the goal of a clever study recently commissioned by DTS, with an eye toward promoting its new DTS Headphone:X technology. For those unfamiliar, Headphone:X has been at the heart of one of the more impressive CES show demos for the last two years running. By encoding multichannel soundtracks using DTS-HD embedded with environmental/room information (real or virtual), then performing a companion decode at the playback end, Headphone:X is able to create an uncannily lifelike recreation of an in-room surround-sound system through a pair of standard stereo earphones. I know—this claim has been heard before from other companies. But I don’t think I’m alone among the press in saying that the DTS demo (admittedly, perfectly optimized in an enclosed demo space created for the show) was a cut above.

Headphone:X is about to start appearing in mobile audio/video products that contain the decode processing (a smartphone for the Chinese market is the first), and DTS is out to prove to music and video streaming services that a small investment in the additional bandwidth required to encode soundtracks with Headphone:X will pay off big for listeners—even more than investing in the greater bandwidth required to upgrade from, say, standard-definition video to high-definition video.
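To put rough numbers on that bandwidth argument (the audio bit rates come from the study described below; the video bit rates are illustrative assumptions, not figures from DTS): stepping a stereo track up from 96 kbps to a 256 kbps Headphone:X stream costs about 160 kbps of extra bandwidth. Upgrading a streamed picture from 480p standard definition (commonly around 1.5 Mbps) to 1080p high definition (commonly 4 Mbps or more) costs roughly 2,500 kbps or more, on the order of fifteen to twenty times as much.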

To conduct the study, DTS brought in the independent market research firm Neuro-Insight, which specializes in—no joke here—measuring consumer brain response to various products and stimuli. The company literally puts a headset with electrodes on study participants, then draws on the established knowledge base of brainwave mapping that’s come out of academic and commercial research (some of it their own) to determine whether, for example, an advertisement or specific details of it have deeply registered in the area of the brain associated with long-term memory.

For the DTS study, Neuro-Insight recruited 107 participants and placed upon each of their heads a cap loaded with 28 sensors (saline-dampened felt pads help with conductivity). In their ears went a pair of the same commonly available, low-end earbuds, and in front of each sat a 10-inch tablet used as the playback device. Each participant was then played the same four video clips, a mix of music video and movie-style live-action content. Video quality was varied randomly among relatively poor 240p resolution, 480p standard definition, and 1080p high definition. Half of the participants—the control group—listened to their video with standard stereo sound at a low-fidelity 96 kilobits per second (kbps). The other half were fed 256 kbps Headphone:X audio.

The Neuro-Insight team then went about measuring and recording two key real-time brainwave indicators. One is dubbed Global Memory Encoding, which measures the creation of long-term memories that can later affect decisions. To quote Neuro-Insight CEO Pranav Yadav, “Global memory assesses memories created for emotional or holistic features—such as themes, storylines, music—which is why we reported on this metric for this study.”

The second metric, called the Hedonic Index, is actually the more revealing here. It directly registers the strength of the pleasure response in the brain, Yadav says. “The patterns of brain activity the Hedonic Index measures are similar to those seen when we eat something we find enjoyable, hear a funny joke or, for that matter, are anticipating the ‘high’ associated with an addictive drug,” he explains. “The phrase ‘likeability’ and Hedonic Index can be used interchangeably.”

COMMENTS
javanp

I was most impressed to hear that Headphone:X is only 256 kbps and that it can be used with earbuds--I thought special rigs were required.

Rob Sabin
My understanding is that any set of earphones will work with Headphone:X if the content is first encoded and then properly decoded by the playback device. However, one of the interesting benefits of the technology is that specific popular earphone models can be optimized by DTS for a more convincing experience, and the profiles for those phones could be made available as a menu option in the player. Also, the Headphone:X encoding need not default to some generic acoustic space on playback; content creators can decide if they want the playback to sound like a specific hall or studio space.

We'll know more about how the technology is realized when we see products in the U.S. and get to test them out. I'll be anxious to see if and when Headphone:X decoding turns up in AVRs as well. It should be a no-brainer to add the encode to DTS-encoded Blu-ray soundtracks, which would then allow late-night viewers to closely replicate the sound of their home theater in headphones without disturbing the family or neighbors. The CES demo pulled off that trick very, very well.
dustyjohnson

To me, sound has always been a huge part of multimedia. So often we capture a scene by capturing its sound, and that's borne out here, if through a fairly conventional experiment. What other result could there have been?

aleksandr

With a 4K TV you have to sit very close to see the difference, but that isn't recommended for optimal sound imaging and clarity. Which is more important: resolution or audio quality?
