Experimental brain-controlled hearing aid can pick out voices in a crowd


By SHARON BEGLEY | MAY 15, 2019

Buzz buzz secret hmmm hmmm don’t tell anyone garble garble layoffs …

The brain is unsurpassed in its ability to pick out juicy tidbits and attention-grabbing voices against a cacophony of background noise. Hearing aids, however, stink at this “cocktail party effect”: Rather than amplifying a particular voice by selective attention, they amplify every sound equally.
On Wednesday, researchers unveiled a possible solution — an experimental hearing aid that reads the mind. It uses artificial intelligence to separate the sounds of different speakers, detects brain activity that makes one of those voices stand out from the others, and amplifies only that voice before delivering the sound to the listener, they explained in Science Advances.
If the technology proves practical — and for that it probably can’t require implanting electrodes on the surface of the brain, as the current version does — it could serve as the basis for a brain-controlled hearing aid that would let people with hearing loss function better in social settings as well as in the noisy world.
The project, led by electrical engineer Nima Mesgarani of Columbia University’s Zuckerman Mind Brain Behavior Institute, is one of many trying to make hearing aids more like normal hearing. The $500 Bose Hearphones, for instance, pair with a smartphone app and use directional mics so users can hear one person better than another, plus controls to dampen, say, traffic noise. But no current device can amplify selected conversations from multiple sources in a crowd, as the normally hearing brain can.
“Even the most advanced digital hearing aids don’t know which voices they should suppress and which they should amplify,” Mesgarani said.
If they did, it would make a major difference to people with impaired hearing, said Roger Miller, who directs the neural prosthetics program at the National Institute on Deafness and Other Communication Disorders, which funded the study. “There is real gold to be mined in that hill,” he said.
Mesgarani started his mining in the brain. He and his graduate adviser discovered in 2012 that when people converse, the listener’s brain waves echo the acoustic features of the speaker’s voice, turning up its perceived volume and filtering out extraneous voices.
That ability comes from the brain’s secondary auditory cortices, one behind each ear. They amplify one voice over others by the simple means of paying attention, in a process called top-down control. (“Top” refers to an executive function such as conscious attention; “down” refers to the lower-level sensory process it steers, in this case hearing.) The sound of a familiar voice, a familiar word (one’s name), an emotionally resonant word (divorce) or tone, or another attention-grabber causes this region to increase the perceived volume of whatever grabs its attention.
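To make that “echo” concrete: a common way researchers decode auditory attention is to reconstruct an envelope-like signal from the listener’s cortical activity and check which speaker’s amplitude envelope it tracks most closely. The Python sketch below is illustrative only, not the paper’s method; the function names are hypothetical, and it assumes the neural recording has already been reconstructed into a trace sampled at the same rate as the audio envelopes, which is the genuinely hard step.

```python
import numpy as np

def amplitude_envelope(audio: np.ndarray, frame: int = 160) -> np.ndarray:
    # Crude "volume over time" trace: RMS energy of non-overlapping frames.
    n = len(audio) // frame
    return np.sqrt((audio[: n * frame] ** 2).reshape(n, frame).mean(axis=1))

def decode_attended_speaker(neural_trace: np.ndarray,
                            voices: list[np.ndarray]) -> int:
    # The attended voice is the one whose envelope the neural trace
    # "echoes" most closely, scored here by Pearson correlation.
    # Assumes neural_trace is already envelope-like and length-matched.
    scores = [np.corrcoef(neural_trace, amplitude_envelope(v))[0, 1]
              for v in voices]
    return int(np.argmax(scores))
```

In the real system the reconstruction from the electrode recordings is learned, and the paper works with full spectrograms (voiceprints) rather than a single envelope, but the selection logic is the same in spirit: match the neural signal against each candidate voice and pick the best fit.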
The brain-controlled hearing aid first separates the audio signals of different speakers. It then determines the spectrogram, or voiceprint, of each, meaning how a voice’s volume and frequency vary with time. Next, it detects the brain waves in a listener’s auditory cortex (via an implanted 16-by-16 electrode array), which indicate which voice the listener is paying attention to. Finally, the system searches for that particular voice and amplifies it, and only it. When the listener’s attention turns to a different voice, the system quiets the first one and dials up the volume on the new one.
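The final amplify-and-duck step is simple in principle. Here is a minimal sketch, assuming the separation and attention-decoding stages are black boxes that hand over float audio arrays of equal length plus an attended-speaker index; the names and gain values are hypothetical, not from the paper.

```python
import numpy as np

def remix(voices: list[np.ndarray], attended: int,
          boost_db: float = 9.0, duck_db: float = -9.0) -> np.ndarray:
    # Re-mix the separated voices: raise the attended one, lower the rest.
    out = np.zeros_like(voices[0])
    for i, voice in enumerate(voices):
        gain = 10 ** ((boost_db if i == attended else duck_db) / 20)
        out += gain * voice
    return out

# Run per short audio window; when the decoded attention index changes,
# the mix follows on the next window, producing the switching behavior
# described above:
#
#   voices = separate(window)                     # hypothetical separation step
#   idx = decode_attended_speaker(trace, voices)  # see the sketch above
#   output = remix(voices, idx)
```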
Three patients with epilepsy who were undergoing brain surgery volunteered to let Dr. Ashesh Mehta of the Northwell Health Institute for Neurology and Neurosurgery on New York’s Long Island implant the electrode array in their brains. The electrodes detected brain activity that occurred while the participants listened to two speakers talking at once, focusing first on one and then on the other, as directed by the scientists. The scientists detected the unique brain activity corresponding to paying attention to each voice.
“The brain waves of listeners tracked only the voice of the speaker they’re focusing on,” Mesgarani said.
This research is another in a growing list of studies that tap the brain’s activity in order to produce an output that the body can’t otherwise manage, such as a paralyzed person moving a mechanical arm or someone with ALS turning thoughts into speech.
To find widespread use, the mind-reading hearing aid would have to work via electrodes on the scalp. The Columbia team is working on the scalp version, as well as one with electrodes around the ear.
Their earlier mind-reading hearing aid worked only on voices it had been trained to recognize, such as those of family members. It could detect and amplify those voices but not unknown ones. The next-gen device “can recognize and decode a voice — any voice — right off the bat,” Mesgarani said.
