MRI and AI combine to decode your thoughts

Scientists have combined functional magnetic resonance imaging (fMRI) with artificial intelligence (AI) language models to translate internal experiences into words. The resulting decoder can reproduce, with a high level of accuracy, the gist of stories that a person listened to, or imagined telling, inside the scanner. The technology is still in its infancy and must be trained extensively for each person who uses it, but the researchers say the AI language system can make informed guesses about the words that evoked a pattern of brain activity just by looking at fMRI scans.

The technology could help people who can’t speak or otherwise outwardly communicate, such as those who have suffered strokes or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require devices to be implanted in the brain, but neuroscientists hope that non-invasive techniques such as fMRI can decipher internal speech without the need for surgery.

Unlike previous decoding systems that relied on sensors placed directly on the surface of the brain, the Texas team's approach attempts to decode more freeform thought: it captures the meaning behind the words rather than the words themselves, opening potential applications beyond communication. This kind of approach could even help with one of the biggest challenges in medicine, understanding mental illness.


The researchers had three participants spend up to 16 hours each in an fMRI scanner, wearing headphones that streamed audio from podcasts. The scans recorded activity across the whole brain, not just in areas associated with speech and language. As the participants listened to hours of stories, the MRI data was fed to a computer that learned to match specific patterns of brain activity with particular sequences of words.

The computer then attempted to reconstruct these stories from each participant's brain activity, using an early version of GPT, the natural language model that later underpinned ChatGPT. The reconstructed stories were paraphrased versions of what the participants heard, though they contained errors. In a second experiment, the system was able to paraphrase words a person merely imagined saying. In a third, participants watched silent videos, and the system produced a verbal description of what was happening on screen.
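The reconstruction step described above can be sketched, in highly simplified form, as a search that combines a language model's word proposals with a model that predicts brain activity for candidate word sequences, keeping whichever candidates best match the recorded fMRI data. Everything below, the vocabulary, both models, and the data, is a toy stand-in for illustration, not the study's actual code:

```python
import numpy as np

# Hypothetical six-word vocabulary (the real system draws from a full language model).
VOCAB = ["the", "dog", "ran", "home", "cat", "slept"]

def language_model_score(sequence, word):
    """Toy language model: probability of `word` continuing `sequence`.
    A real system would use a neural language model such as GPT."""
    return 1.0 / len(VOCAB)

def predict_brain_response(sequence):
    """Toy encoding model: map a word sequence to a predicted fMRI activity
    pattern (here, a deterministic pseudo-random 8-dimensional vector)."""
    seed = abs(hash(" ".join(sequence))) % (2**32)
    return np.random.default_rng(seed).normal(size=8)

def decode(observed, n_words=3, beam_width=2):
    """Beam search: extend candidate word sequences one word at a time,
    keeping those whose *predicted* brain response best matches the
    observed fMRI activity, weighted by language-model plausibility."""
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for seq, _ in beams:
            for word in VOCAB:
                new_seq = seq + [word]
                pred = predict_brain_response(new_seq)
                # Combined score: language-model log-probability plus
                # closeness of predicted activity to observed activity.
                score = np.log(language_model_score(seq, word)) - np.linalg.norm(pred - observed)
                candidates.append((new_seq, score))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # prune to the best few candidates
    return " ".join(beams[0][0])

# Simulate "observed" brain activity for a hidden three-word stimulus,
# then decode a guess from the activity alone.
observed = predict_brain_response(["the", "dog", "ran"])
print(decode(observed))
```

With toy random models the decoded guess need not match the hidden stimulus; the point is the architecture, in which the language model narrows the space of plausible word sequences and the encoding model selects among them by comparing predicted and measured brain activity.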

The technology could improve fMRI decoding enough to connect with people who can’t outwardly communicate. The study showed a high level of accuracy: between 72 and 82 percent of the time, the decoder recovered the meaning of the stories more accurately than would be expected by chance. The system still struggles with grammatical features such as pronouns, however, and cannot decipher proper nouns such as names and places.

Even though the decoder cannot construct an exact transcript of the words a person heard or imagined, the researchers emphasize the need for proactive policies that protect the privacy of people’s internal mental processes. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy.

The research is a “proof of concept that language can be decoded from noninvasive recordings of brain activity,” says Jerry Tang, a computational neuroscientist at the University of Texas at Austin and the study’s lead author. There is far more information in brain data than researchers initially thought, Tang adds, and the current results are better than anything previously achieved in fMRI language decoding. The technology is still in its infancy, but the early results look promising.

The MRI-based system is currently slower and less accurate than an experimental communication system being developed for paralyzed people by a team led by Dr. Edward Chang at the University of California, San Francisco. That system requires a sheet of electrical sensors to be implanted directly on the surface of the brain, where it records activity close to the source. At least one person has used it to accurately generate 15 words a minute using only his thoughts.

An MRI-based system, by contrast, requires no surgery. Future versions of such systems could raise ethical questions, particularly if they could read a person's thoughts without their cooperation. For now, though, the technology offers individuals a new way of communicating and could become a useful tool.

The benefits of combining AI with brain scanning are numerous. These systems provide a non-invasive window on language and thought, allowing researchers to explore how the brain processes language and information. They also have the potential to transform the lives of individuals who are unable to speak because of brain injury or disease, providing them with new avenues for communication.

These systems could also offer a new way for people to communicate with one another, bridging the gap for individuals who cannot communicate because of language or other barriers. With more research and development, decoders like this may one day reveal far more about how the brain represents thought.
