Decoding inner speech from brain signals

September 9, 2025

At a Glance

  • Scientists designed a brain-computer interface to decode inner speech in real time from activity in the brain's motor cortex.
  • The findings could help some people with paralysis communicate more easily than with current interfaces, which rely on attempted speech.
Image: Using the inner speech neuroprosthesis, a participant reads the cued sentence in the text above. The text below shows what's being decoded in real time as she imagines speaking the sentence. Credit: Emory BrainGate Team

Brain-computer interfaces (BCIs) show promise for allowing people with paralysis to communicate. These devices use machine learning to translate electrical signals within the brain into words. Current systems require their users to physically attempt to speak.
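
To make that decoding step concrete, here is a minimal sketch of how a classifier can map neural features (such as binned firing rates from an electrode array) to words. The study's actual decoder is far more sophisticated; the simulated data, vocabulary, and model choice below are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = ["hello", "water", "help", "yes", "no"]  # hypothetical word set
n_trials, n_channels = 200, 96

# Simulated binned firing rates: each word evokes a distinct mean pattern,
# observed through per-trial noise.
labels = rng.integers(len(vocab), size=n_trials)
patterns = rng.normal(size=(len(vocab), n_channels))
features = patterns[labels] + rng.normal(scale=0.5, size=(n_trials, n_channels))

# Fit a simple linear classifier from neural features to word labels.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("Training accuracy:", clf.score(features, labels))
```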

Decoding inner speech, that is, thoughts expressed as words in a person's head, could inform the design of practical speech BCIs. Because inner speech doesn't involve attempting to speak, decoding it could make speech output easier for some users. But beyond the technical challenge, one concern is the possibility of exposing thoughts that the person didn't intend to say out loud.

A research team led by Dr. Erin Kunz, Benyamin Abramovich Krasa, and Dr. Francis Willett at Stanford University studied how inner speech is represented in the brain compared with attempted speech. They also explored whether a BCI could decode such speech, and whether unintentional output could be prevented. Results appeared in Cell on August 14, 2025.

The team began by recording activity in the brain's motor cortex from four participants. All had impaired speech due to either ALS (amyotrophic lateral sclerosis) or stroke. Participants either attempted to speak or imagined saying a set of words. For all participants, the team found that words could be decoded from motor cortex activity in both cases.

Attempted and inner speech evoked similar patterns of neural activity. But attempted speech was associated with stronger signals on average than inner speech. This allowed the decoder to distinguish attempted from inner speech.
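
A hedged sketch of how that distinction might work in practice: if attempted speech evokes larger signals on average, even a simple threshold on overall activity can separate the two modes. The activity levels below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented per-trial mean activity: attempted speech is stronger on average.
attempted = rng.normal(loc=1.0, scale=0.3, size=100)
inner = rng.normal(loc=0.5, scale=0.3, size=100)

# Classify each trial by thresholding halfway between the two class means.
threshold = (attempted.mean() + inner.mean()) / 2
accuracy = ((attempted > threshold).mean() + (inner <= threshold).mean()) / 2
print(f"threshold={threshold:.2f}, balanced accuracy={accuracy:.2f}")
```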

Next, the participants imagined speaking whole sentences while a BCI decoded the sentences in real time. Error rates were between 14% and 33% for a 50-word vocabulary, and between 26% and 54% for a 125,000-word vocabulary. These participants had severe weakness in the muscles used for speech. They preferred using imagined speech over attempted speech, mainly because of the lower physical effort required.
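
For context, speech-decoder error rates of this kind are conventionally reported as word error rate (WER): the word-level edit distance between the decoded sentence and the cued sentence, divided by the cued sentence's length. A minimal sketch of that computation (the example sentences are invented):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("i would like some water", "i would like sun water"))  # 0.2
```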

To test whether the system could also decode unintentional inner speech, participants performed non-verbal tasks involving sequence recall and counting. A speech BCI was able to decode the memorized sequences and the counted numbers from participants' brain signals.

The team devised two strategies to prevent BCIs from decoding private inner speech. In one, they trained the decoder to distinguish attempted speech from inner speech and to silence the latter. This prevented decoding of inner speech while still effectively decoding attempted speech. Another strategy used a system that only decoded inner speech when it was first "unlocked" by detecting a specific keyword from the user. The system recognized the keyword more than 98% of the time. Some speech BCIs may require additional design elements to lock out decoding until the user intends to speak.
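
A minimal sketch of the keyword-unlock idea, assuming decoded words arrive one at a time: output stays suppressed until a designated keyword is recognized, after which decoded words pass through. The class, interface, and placeholder keyword below are hypothetical, not the study's implementation.

```python
class GatedDecoder:
    """Suppress decoded inner speech until a wake keyword is detected."""

    def __init__(self, keyword: str = "unlock-phrase"):  # stand-in keyword
        self.keyword = keyword
        self.unlocked = False

    def process(self, decoded_word: str) -> str | None:
        if not self.unlocked:
            if decoded_word == self.keyword:
                self.unlocked = True  # start passing output through
            return None  # private inner speech stays silenced
        return decoded_word

gate = GatedDecoder()
for word in ["seven", "unlock-phrase", "i", "need", "water"]:
    out = gate.process(word)
    if out:
        print(out)  # prints: i, need, water (one per line)
```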

The findings suggest that attempted speech and inner speech are similarly represented in the brain's motor cortex. BCIs can thus decode inner speech, including private inner speech, from neural signals. But the study demonstrated strategies that can prevent unintentional decoding of private inner speech without sacrificing accuracy. This approach could help people who are unable to speak communicate more easily than with BCIs based on attempted speech.

"This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech," Willett says.

by Brian Doctrow, Ph.D.

References

Kunz EM, Abramovich Krasa B, Kamdar F, Avansino DT, Hahn N, Yoon S, Singh A, Nason-Tomaszewski SR, Card NS, Jude JJ, Jacques BG, Bechefsky PH, Iacobacci C, Hochberg LR, Rubin DB, Williams ZM, Brandman DM, Stavisky SD, AuYong N, Pandarinath C, Druckmann S, Henderson JM, Willett FR. Cell. 2025 Aug 21;188(17):4658-4673.e17. doi: 10.1016/j.cell.2025.06.015. Epub 2025 Aug 14. PMID: 40816265. 

Funding

NIH's National Institute on Deafness and Other Communication Disorders (NIDCD), National Institute of Neurological Disorders and Stroke (NINDS), and Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD); Office of the Assistant Secretary of Defense for Health Affairs; Simons Foundation; A.P. Giannini Foundation; Office of Research and Development, Department of Veterans Affairs; Wu Tsai Neurosciences Institute; Howard Hughes Medical Institute; Larry and Pamela Garlick; Blavatnik Family Foundation; National Science Foundation.