Understanding the Semantic Space: How Word Meanings Dynamically Adapt in the Context of a Sentence

Nora Aguirre-Celis, Risto Miikkulainen


Abstract
How do people understand the meaning of the word “small” when used to describe a mosquito, a church, or a planet? While humans have a remarkable ability to form meanings by combining existing concepts, modeling this process is challenging. This paper addresses that challenge through the CEREBRA (Context-dEpendent meaning REpresentations in the BRAin) neural network model. CEREBRA characterizes how word meanings dynamically adapt in the context of a sentence by decomposing sentence fMRI into words and words into embodied brain-based semantic features. It demonstrates that words in different contexts have different representations and that these changes in word meaning are meaningful to human subjects. CEREBRA’s context-based representations can potentially be used to make NLP applications more human-like.
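To make the core idea concrete, the sketch below illustrates one way context-dependent word meanings of this kind can be expressed in code: each word starts from a context-free feature vector, and the vectors are adjusted per sentence so that their combination better matches an observed sentence-level representation (such as sentence fMRI). This is a minimal conceptual sketch under illustrative assumptions (toy dimensions, random data, averaging as the combination rule), not the paper's implementation.

```python
# Conceptual sketch only: word meanings as feature vectors that are nudged,
# per sentence, so their combination better matches an observed sentence-level
# representation. All names, dimensions, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features = 8                      # toy stand-in for brain-based semantic features

# Context-free (baseline) feature vectors for the words in one sentence.
base_words = {
    "small":  rng.random(n_features),
    "planet": rng.random(n_features),
}

# Observed sentence-level representation (synthetic here; in the paper this
# role is played by sentence fMRI).
sentence_target = rng.random(n_features)

def sentence_from_words(word_vecs):
    """Combine word vectors into a sentence vector (simple average here)."""
    return np.mean(list(word_vecs.values()), axis=0)

# Gradient descent on the word vectors themselves: each word's features are
# adjusted so the combined sentence vector moves toward the observed target.
words = {w: v.copy() for w, v in base_words.items()}
lr = 0.5
for step in range(200):
    pred = sentence_from_words(words)
    error = pred - sentence_target          # gradient of squared error w.r.t. pred
    for w in words:
        # Each word contributes 1/len(words) of the average, hence the scaling.
        words[w] -= lr * error / len(words)

# The difference between adjusted and baseline vectors is the context effect.
for w in words:
    delta = words[w] - base_words[w]
    print(f"{w}: context shift magnitude = {np.linalg.norm(delta):.3f}")
```

Running the sketch prints how far each word's feature vector moved from its baseline, i.e., a toy analogue of how much the sentence context reshaped that word's representation.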
Anthology ID:
2021.semspace-1.1
Volume:
Proceedings of the 2021 Workshop on Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science (SemSpace)
Month:
June
Year:
2021
Address:
Groningen, The Netherlands
Editors:
Martha Lewis, Mehrnoosh Sadrzadeh
Venue:
SemSpace
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://aclanthology.org/2021.semspace-1.1
Cite (ACL):
Nora Aguirre-Celis and Risto Miikkulainen. 2021. Understanding the Semantic Space: How Word Meanings Dynamically Adapt in the Context of a Sentence. In Proceedings of the 2021 Workshop on Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science (SemSpace), pages 1–11, Groningen, The Netherlands. Association for Computational Linguistics.
Cite (Informal):
Understanding the Semantic Space: How Word Meanings Dynamically Adapt in the Context of a Sentence (Aguirre-Celis & Miikkulainen, SemSpace 2021)
PDF:
https://aclanthology.org/2021.semspace-1.1.pdf