It is hoped this type of non-invasive device could one day be used to restore speech in patients who have lost the ability to communicate as a result of an injury or disease.
US scientists claim to have successfully translated thoughts to text in real time, using an AI-powered decoder.
A team based at the University of Texas claims the decoder can generate “intelligible word sequences” from individuals as they listen to a story or silently imagine one.
The researchers said previous brain-computer interfaces have been successful at translating speech from people’s thoughts, but noted that these examples generally require “invasive neurosurgery”.
In the new study published in the scientific journal Nature Neuroscience, the team said their decoder can reconstruct speech non-invasively from functional magnetic resonance imaging (fMRI).
This is a special type of MRI scan that shows how different parts of the brain are working by analysing small changes associated with blood flow.
Capturing the gist of sentences
To test this decoder, the researchers recorded the fMRI responses of three individuals while they listened to 16 hours of narrative stories. The decoder maintained a set of “candidate word sequences”, while a language model proposed continuations for each sequence.
The language model used was GPT-1, a precursor system to GPT-4, the current technology behind the popular ChatGPT.
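The candidate-sequence procedure described above amounts to a beam search: the language model proposes extensions for each candidate, and an encoding model keeps the extensions that best match the recorded brain activity. The following is a minimal toy sketch, not the authors’ code; the vocabulary, function names and word-overlap scoring are all illustrative assumptions standing in for GPT-1 proposals and the fMRI encoding model.

```python
# Toy sketch of the decoding loop. All names, the vocabulary and the
# scoring function are illustrative assumptions: the real system pairs
# GPT-1 word proposals with an encoding model that predicts fMRI
# responses for each candidate word sequence.

VOCAB = ["the", "dog", "ran", "home", "fast"]

def lm_propose(sequence):
    """Stand-in for the language model: propose possible next words."""
    return VOCAB

def encoding_score(sequence, brain_data):
    """Stand-in for the encoding model. In the real system this would
    predict a brain response for the sequence and compare it with the
    recorded fMRI data; here we just count overlap with a toy set."""
    return len(set(sequence) & brain_data)

def decode(brain_data, beam_width=3, steps=4):
    beams = [[]]  # the set of "candidate word sequences"
    for _ in range(steps):
        # The language model extends every candidate sequence...
        candidates = [b + [w] for b in beams for w in lm_propose(b)]
        # ...and the encoding model keeps the best-matching ones.
        candidates.sort(key=lambda s: encoding_score(s, brain_data),
                        reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

print(decode({"the", "dog", "ran", "home"}))
# → ['the', 'dog', 'ran', 'home']
```

Because the winner is whichever candidate best matches the brain data rather than an exact transcript, this style of search naturally produces “gist-level” output.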
“The decoder exactly reproduces some words and phrases and captures the gist of many more,” the researchers said in the study. “Decoder predictions for a test story were significantly more similar to the actual stimulus words than expected by chance under a range of language similarity metrics.”
The decoder’s output isn’t an exact transcript and is instead designed to capture the “gist” of what is being said. The researchers claim their decoder was able to produce text that closely matched the intended meanings of the participants’ original words roughly half the time.
“For a non-invasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said Alex Huth, an assistant professor who co-led the study.
“We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
The researchers also addressed concerns around potential misuse of this technology by having it work only with participants who willingly agreed to train the decoder.
The team claims the decoder cannot properly translate the thoughts of individuals whose brain data was not used to train it.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said study co-lead Jerry Tang. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
Last year, researchers at Meta claimed their AI model was able to decode speech segments from three seconds of brain activity.