Scientists can now "decode" people's thoughts without even touching their heads, The Scientist reported.
Earlier mind-reading techniques relied on electrodes implanted deep inside people's brains. The new method, described in a report posted Sept. 29 to the preprint database bioRxiv, instead relies on a noninvasive brain-scanning technique called functional magnetic resonance imaging (fMRI).
fMRI tracks the flow of oxygenated blood through the brain, and because active brain cells need more energy and oxygen, this information provides an indirect measure of brain activity.
By its nature, this scanning method cannot capture brain activity in real time, because electrical signals from brain cells travel much faster than blood circulates through the brain.
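To see why the signal is so sluggish, consider a minimal sketch of the standard modeling idea: neural activity convolved with a canonical hemodynamic response function (HRF) yields a simulated BOLD signal. The burst timing, sampling rate, and parameter values below are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma hemodynamic response function (SPM-style shape).
def hrf(t):
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

dt = 0.1                              # simulation step, seconds
kernel = hrf(np.arange(0, 32, dt))    # HRF support: ~32 s

# A brief burst of neural activity at t = 10 s...
neural = np.zeros(600)                # 60 s at 10 samples/s
neural[100:105] = 1.0

# ...produces a BOLD response that peaks ~5 s later and is smeared
# over many seconds -- the slow, indirect measure fMRI records.
bold = np.convolve(neural, kernel)[: len(neural)] * dt
print(f"neural onset: {np.argmax(neural) * dt:.1f} s")
print(f"BOLD peak:    {np.argmax(bold) * dt:.1f} s")
```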
But remarkably, the study authors found that they could still use this imperfect indirect measure to decode the semantic meaning of people’s thoughts, although they could not produce word-for-word translations.
"If you had asked any cognitive neuroscientist in the world 20 years ago if this was doable, they would have laughed you out of the room," lead author Alexander Huth, a neuroscientist at the University of Texas at Austin, told The Scientist.
For the new study, which has yet to be peer-reviewed, the team scanned the brains of a woman and two men in their 20s and 30s. Each participant listened to a total of 16 hours of different podcasts and radio shows over multiple sessions in the scanner.
The team then fed those scans to a computer algorithm they called a “decoder,” which compared patterns in the audio to patterns in recorded brain activity.
The algorithm could then take an fMRI recording and generate a story from it, and that story matched the original plot of the podcast or radio show "pretty well," Huth told The Scientist.
In other words, the decoder could infer which story each participant had heard based on their brain activity.
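The preprint's actual decoder is considerably more elaborate (it pairs a generative language model that proposes word sequences with an encoding model that predicts the brain activity each sequence should evoke), but the underlying pattern-matching idea can be sketched in a few lines. Everything below is a toy illustration with synthetic data, not the authors' code: a ridge-regression encoding model is fit from stimulus features to voxel responses, and candidate stories are then scored by how well their predicted activity correlates with the recorded scan.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 400, 50, 200   # arbitrary toy dimensions

# Synthetic training data: semantic features of the training stimuli (X)
# and the fMRI responses they evoked (Y).
X = rng.standard_normal((n_time, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
Y = X @ W_true + 0.5 * rng.standard_normal((n_time, n_vox))

# 1. Fit a ridge-regression "encoding model": features -> voxel activity.
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# 2. Decode: given a new brain recording, score each candidate story by
#    how well the encoding model's prediction matches the recorded data.
def score(candidate_feats, recorded):
    pred = candidate_feats @ W
    return np.corrcoef(pred.ravel(), recorded.ravel())[0, 1]

candidates = [rng.standard_normal((60, n_feat)) for _ in range(3)]
recording = candidates[1] @ W_true + 0.5 * rng.standard_normal((60, n_vox))

best = max(range(3), key=lambda i: score(candidates[i], recording))
print(f"decoder picks story {best}")   # picks story 1, the true stimulus
```

In the real system, the candidates are not a fixed list but word sequences proposed on the fly by a language model; the scoring principle, though, is the same.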
That said, the algorithm made some mistakes, such as mixing up character pronouns and confusing first and third person. It "knows what's happening pretty accurately, but not who is doing the things," Huth said.
In additional tests, the algorithm could fairly accurately describe the plot of a silent movie that participants watched in the scanner. It could even recount a story that participants imagined telling in their heads.
In the long term, the research team aims to develop this technology so that it can be used in brain-computer interfaces designed for people who cannot speak or type.
Read more about the new decoding algorithm in The Scientist.
This article was originally published by Live Science.