Scientists Develop Method to Convert Brain Signals into Speech

Many people – including some of our clients – have lost the ability to speak following accidents, illnesses such as strokes, or neurodegenerative disorders. Scientists have recently developed a method that could dramatically change the way these people communicate.

People who have lost the ability to speak as a result of injury or disease are often forced to rely on painstaking means of communication based on small physical movements, such as movements of the head or eyes. A famous example, the physicist Stephen Hawking, used a muscle in his cheek to type keyboard characters that a computer then converted into speech. Now, as reported recently in Nature, scientists have developed a system that decodes the brain’s vocal intentions and translates them into audible speech. The hope is that one day this technology could be used to help people who cannot speak.

Previously, researchers had been able to decode brain signals that reflect the recognition of letters and words (sound representations), but those approaches were not as fluid or fast as natural speech, producing only about eight words per minute. The new system works by deciphering the brain’s motor commands that guide vocal movements during speech – movements of the tongue and lips – and generating intelligible sentences that approximate the individual’s natural rhythm of speech. This new system, which represents a leap from decoding single syllables to whole sentences, can produce about 150 words per minute, the natural pace of speech.

To develop the new method, researchers implanted participants with electrode arrays: stamp-sized pads containing hundreds of electrodes placed on the surface of the brain. Each participant recited hundreds of sentences while the electrodes recorded the firing patterns of neurons. The researchers associated those patterns with the subtle movements of the participant’s lips, tongue, larynx, and jaw that occur during speech, and then translated those movements into spoken sentences. Even simply mimicking the act of speaking provided the computer with enough information to recreate several of the same sounds.
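For technically minded readers, the sketch below illustrates the two-stage idea in the most stripped-down form possible: one mapping from recorded neural activity to estimated articulator movements, and a second mapping from those movements to acoustic features that a synthesizer could voice. Everything here – the array sizes, the synthetic data, and the use of simple linear fits in place of the study’s trained models – is a hypothetical stand-in, intended only to make the data flow concrete, not a reproduction of the published system.

```python
import numpy as np

# Purely illustrative two-stage decoding sketch (hypothetical stand-in):
#   Stage 1: neural activity -> estimated articulator movements
#            (lips, tongue, jaw, larynx)
#   Stage 2: articulator movements -> acoustic features for a synthesizer
# Simple linear least-squares fits are used here only to show the data flow;
# the actual study used far more sophisticated trained models.

rng = np.random.default_rng(0)

n_timesteps = 200     # time samples of recorded activity
n_electrodes = 256    # hypothetical electrode count on the array
n_articulators = 8    # hypothetical kinematic features
n_acoustic = 32       # hypothetical acoustic (spectral) features

# Synthetic "recordings" paired with training targets
neural = rng.standard_normal((n_timesteps, n_electrodes))
true_kinematics = rng.standard_normal((n_timesteps, n_articulators))
true_acoustics = rng.standard_normal((n_timesteps, n_acoustic))

# Stage 1: fit a map from neural activity to articulator movement
W1, *_ = np.linalg.lstsq(neural, true_kinematics, rcond=None)

# Stage 2: fit a map from articulator movement to acoustic features
W2, *_ = np.linalg.lstsq(true_kinematics, true_acoustics, rcond=None)

# Decoding a new recording: brain signals -> movements -> sound features
new_neural = rng.standard_normal((1, n_electrodes))
est_kinematics = new_neural @ W1      # stage 1
est_acoustics = est_kinematics @ W2   # stage 2 (would feed a synthesizer)

print(est_acoustics.shape)  # (1, 32): one frame of speech features
```

The design choice the sketch preserves is the intermediate articulatory step: decoding movements first, rather than sounds directly, is what distinguishes this approach from the slower, sound-representation methods described above.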

The researchers then had listeners evaluate the virtual voices and found that approximately 70% of the synthesized speech was intelligible. The study showed that the speech decoder works with mimed or mimicked words, but it is still unclear whether it would work with words that people only think, without moving their mouths. The team plans to move to clinical trials to test the system further.

The team also found that a synthesized voice system built for one person could be used and adapted by someone else, suggesting that an off-the-shelf virtual voice system may be possible one day. The field, known as brain-machine interface technology, is advancing rapidly, with teams around the world adding refinements that could be tailored to specific injuries.

“With continued progress,” wrote Chethan Pandarinath and Yahia H. Ali, biomedical engineers at Emory University and the Georgia Institute of Technology, in a commentary accompanying the study, “we can hope that individuals with speech impairments will regain the ability to freely speak their minds and reconnect with the world around them.”
