Decoding Semantic Representations in the Brain Under Language Stimuli with Large Language Models

Anna Sato, Ichiro Kobayashi


Abstract
Brain decoding technology is paving the way for breakthroughs in the interpretation of neural activity to recreate thoughts, emotions, and movements. Tang et al. (2023) introduced a novel approach that uses language models as generative models for brain decoding from functional magnetic resonance imaging (fMRI) data. Building on their work, this study explored three additional language models alongside the GPT model used in the previous research to improve decoding accuracy. Furthermore, we added an evaluation metric based on an embedding model, which captures higher-level semantic similarity than BERTScore. By comparing decoding performance across models and identifying the factors that contribute to it, we found that high decoding accuracy does not depend solely on the ability to accurately predict brain activity. Instead, the type of text the model tends to generate (e.g., web text, blogs, news articles, or books) plays a more significant role in achieving precise sentence reconstruction.
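To make the embedding-based metric concrete, the following is a minimal sketch of how such an evaluation can be computed: the reference stimulus sentence and the decoded reconstruction are each mapped to a single sentence embedding, and their cosine similarity serves as the score. The sentence-transformers library and the all-MiniLM-L6-v2 model here are illustrative assumptions, not the setup reported in the paper.

# Hedged sketch of an embedding-based similarity metric.
# Assumptions: sentence-transformers is installed and the model choice
# ("all-MiniLM-L6-v2") is ours for illustration, not the paper's.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_similarity(reference: str, decoded: str) -> float:
    """Cosine similarity between the sentence embeddings of the
    reference stimulus text and the decoded reconstruction."""
    ref_emb, dec_emb = model.encode([reference, decoded], convert_to_tensor=True)
    return util.cos_sim(ref_emb, dec_emb).item()

# Example: a paraphrase with little token overlap still scores high.
print(semantic_similarity(
    "I walked along the beach at sunset.",
    "At dusk I strolled down the shoreline.",
))

Unlike BERTScore, which matches texts token by token, a single sentence embedding compares the two texts as whole meanings, which is why such a metric can reward faithful paraphrases that share few surface tokens.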
Anthology ID: 2025.wraicogs-1.6
Volume: Proceedings of the First Workshop on Writing Aids at the Crossroads of AI, Cognitive Science and NLP (WRAICOGS 2025)
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Michael Zock, Kentaro Inui, Zheng Yuan
Venues: WRAICOGS | WS
Publisher: International Committee on Computational Linguistics
Pages: 53–67
URL: https://aclanthology.org/2025.wraicogs-1.6/
Cite (ACL): Anna Sato and Ichiro Kobayashi. 2025. Decoding Semantic Representations in the Brain Under Language Stimuli with Large Language Models. In Proceedings of the First Workshop on Writing Aids at the Crossroads of AI, Cognitive Science and NLP (WRAICOGS 2025), pages 53–67, Abu Dhabi, UAE. International Committee on Computational Linguistics.
Cite (Informal): Decoding Semantic Representations in the Brain Under Language Stimuli with Large Language Models (Sato & Kobayashi, WRAICOGS 2025)
PDF: https://aclanthology.org/2025.wraicogs-1.6.pdf