You’re in a noisy room filled with voices. Yet you’re able to shut out almost all of them, focusing only on the one person you’re listening to.
To study this capability, UCSF neurosurgeon Edward Chang and his team took advantage of one of the rare opportunities researchers have to watch the human brain in action: during brain surgery on a patient with severe epilepsy.
Epilepsy surgery involves several stages. Before operating, neurosurgeons must identify where seizures are taking place, while also mapping out parts of the brain that are critical to functions like motor control and speech. To do this, surgeons remove a section of the skull and place a grid of electrodes directly on the brain’s outer surface, or cortex. The electrodes record activity across the cortex in real time.
Patients (who are conscious, believe it or not) are then asked to perform specific functions like counting to ten, and the electrodes tell scientists where in the brain the processing is taking place. Often patients allow neuroscientists to tack on a couple of extra exercises, purely for the sake of science. That’s where Chang and his team came in.
They wanted to study the cocktail party effect: that ability to filter out non-essential voices, focusing on just one. To do this, they played the patients a clip of overlapping voices. Like this one…[audio:http://ww2.kqed.org/news/wp-content/uploads/sites/10/2012/04/selective.mp3|titles=selective]
They then asked the patients to pull out a specific piece of information, such as: “Where is Tiger going?” This forced them to focus on one voice, filtering out the other. Try it now…[audio:http://ww2.kqed.org/news/wp-content/uploads/sites/10/2012/04/selective.mp3|titles=selective]
(The answer: Tiger is going to “Blue 2.”)
Using the electrodes, Chang says, “we could see at the millisecond level when they were switching from listening to both voices, and when they were tuning into one and — at the same time — tuning out the other.”
One surprise was where in the brain this filtering takes place.
“Traditionally,” says Chang, “we’ve thought that it happens at a much higher cognitive level.” But in fact the processing takes place in the auditory system, closer to the ear, a part of the brain that had not been thought to perform complex functions.
Chang hopes that this work can one day help find treatments for the type of hearing loss that occurs in old age. He says the work may also be useful to people who design voice-recognition systems for cell phones and other devices, which often falter in noisy environments.
The findings, published Wednesday in the journal Nature, remind Chang of the amazing complexity of the human brain and how it has adapted for social behavior.
“It says that the brain does something remarkable in that it’s not just a passive receiver of information coming into the ear. It’s constantly and actively working on that information to extract the information that we need for our behavior.”