A device attached to the person converts brain signals meant for the vocal tract directly into words, which then appear as text on a screen.
Every year, thousands of people lose the ability to speak due to stroke, accident, or disease. In a scientific breakthrough, researchers at UC San Francisco (UCSF) have developed a tool called a "speech neuroprosthesis" that has enabled a man with severe paralysis to communicate in full sentences through text that appears on a screen. It was tested for the first time, successfully, on a 36-year-old man diagnosed with post-stroke anarthria and spastic quadriparesis. The results were published in the New England Journal of Medicine.
Breakthrough Technology: @UCSF researchers, led by neurosurgeon Edward Chang, have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate. https://t.co/xH3Yup6xat #ucsfweill— UCSF Neurosurgery (@NeurosurgUCSF) July 14, 2021
What sets this method apart from existing ones is that earlier communication techniques focused on restoring communication through spelling-based approaches that typed out letters one at a time. This research, led by neurosurgeon Edward Chang, MD, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, instead translates signals intended to control the muscles of the vocal system for speaking words, rather than signals to move an arm or hand for typing. What it essentially means is that signals from the brain to the vocal tract are converted directly into words, which then appear as text on a screen.
“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” Chang said, according to a press release from UCSF. He also noted that spelling-based approaches that use typing, writing, or controlling a cursor are slower and more laborious alternatives. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.” A subdural multielectrode array was implanted over the region of the sensorimotor cortex that controls speech. Using deep-learning algorithms, the researchers built computational models to detect and classify words from patterns in the cortical activity recorded across 48 sessions totaling 22 hours.
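To make the classification idea concrete, here is a minimal toy sketch. The actual study used deep-learning models (with a language model) trained on real cortical recordings; this stand-in uses a simple nearest-centroid classifier, and the vocabulary, feature vectors, and "activity windows" below are all invented for illustration only.

```python
# Toy illustration of decoding words from activity patterns.
# NOT the UCSF system: the study used deep neural networks on real
# cortical recordings; here a nearest-centroid classifier stands in,
# and all data are made up for the example.

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_word(window, templates):
    """Return the vocabulary word whose template is closest
    (squared Euclidean distance) to the recorded activity window."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda w: dist(window, templates[w]))

# Hypothetical per-word "cortical activity" training examples.
training = {
    "hello":  [[0.9, 0.1, 0.2], [1.0, 0.0, 0.3]],
    "water":  [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
    "family": [[0.5, 0.5, 0.1], [0.4, 0.6, 0.2]],
}
templates = {word: examples and centroid(examples)
             for word, examples in training.items()}

# A new activity window is decoded to the closest word template.
print(nearest_word([0.15, 0.85, 0.95], templates))  # → water
```

In the real system, a sequence of such per-word predictions is further constrained by a language model so that the decoded words form plausible sentences.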
"This is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak.” In @NEJM, @UCSF neurosurgeon Edward Chang & team describe a new technology for restoring speech: https://t.co/iBNCgS9lBm #ucsfweill pic.twitter.com/z8xnkwn8qL— UCSF Neurosurgery (@NeurosurgUCSF) July 15, 2021
The research builds on more than a decade of work and culminated with the first participant of a clinical research trial. “To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” Chang stated. “It shows strong promise to restore communication by tapping into the brain's natural speech machinery.” This new approach could one day enable people with speech impairments to communicate fully. To investigate the potential of this technology, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology.
Groundbreaking “neuroprosthesis” with the promise to transform the quality of life of patients experiencing speech loss. Kudos to the patient BRAVO1, @ChangLabUcsf, @NeurosurgUCSF, and all their collaborators with this pioneering work. https://t.co/AcwkAS7uc4— Andy Lai, MD, MPH (@andyrlai) July 15, 2021
Chang and Ganguly launched a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice) and recruited a man who had suffered a devastating brainstem stroke more than 15 years earlier. The stroke had severely damaged the connection between his brain and his vocal tract and limbs. The man, who asked to be referred to as BRAVO1, had been using a pointer attached to a cap to point at letters and form words for communication. After the multielectrode array was implanted, he attempted to speak a selected set of words while the signals were recorded.
“We were thrilled to see the accurate decoding of a variety of meaningful sentences,” said David Moses, PhD, who helped translate the speech signals into full words. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.” The team was able to decode sentences from the participant’s cortical activity in real time at a median rate of 15.2 words per minute. They will now expand the trial to include more participants affected by severe paralysis and communication deficits.