Nora Aguirre-Celis


2021

Understanding the Semantic Space: How Word Meanings Dynamically Adapt in the Context of a Sentence
Nora Aguirre-Celis | Risto Miikkulainen
Proceedings of the 2021 Workshop on Semantic Spaces at the Intersection of NLP, Physics, and Cognitive Science (SemSpace)

How do people understand the meaning of the word “small” when used to describe a mosquito, a church, or a planet? While humans have a remarkable ability to form meanings by combining existing concepts, modeling this process is challenging. This paper addresses that challenge through the CEREBRA (Context-dEpendent meaning REpresentations in the BRAin) neural network model. CEREBRA characterizes how word meanings dynamically adapt in the context of a sentence by decomposing sentence fMRI patterns into words, and words into embodied, brain-based semantic features. It demonstrates that words in different contexts have different representations and that word meanings change in ways that are meaningful to human subjects. CEREBRA’s context-based representations can potentially be used to make NLP applications more human-like.
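The adaptation the abstract describes (per-word, feature-based representations that shift with the surrounding sentence) can be pictured with a toy calculation. The sketch below is purely illustrative and is not CEREBRA itself: the feature names, the vectors, and the blending rule in `contextualize` are invented assumptions standing in for the brain-based CAR features and the learned adaptation the paper actually models.

```python
import numpy as np

# Toy attribute dimensions (invented; CAR theory uses many more,
# derived from behavioral and neuroimaging data).
FEATURES = ["size", "animacy", "sacredness"]

def contextualize(word_vec, sentence_vecs, alpha=0.3):
    """Shift a base word vector toward the mean of its sentence context.

    A hypothetical stand-in for context adaptation: alpha controls how
    strongly the sentence pulls the word's features.
    """
    context = np.mean(sentence_vecs, axis=0)
    return (1 - alpha) * word_vec + alpha * context

# Invented base vectors over FEATURES.
small    = np.array([0.20, 0.00, 0.00])
mosquito = np.array([0.05, 0.90, 0.00])
planet   = np.array([0.95, 0.00, 0.10])

# "small" ends up with different representations in different sentences.
print(contextualize(small, [small, mosquito]))  # pulled toward tiny, animate
print(contextualize(small, [small, planet]))    # pulled toward huge, inanimate
```

Under this (assumed) blending rule, the same base word yields distinct context-adapted vectors, which is the qualitative behavior the paper reports for its fMRI-derived representations.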

2020

Characterizing Dynamic Word Meaning Representations in the Brain
Nora Aguirre-Celis | Risto Miikkulainen
Proceedings of the Workshop on the Cognitive Aspects of the Lexicon

During sentence comprehension, humans adjust word meanings according to the combination of concepts that occur in the sentence. This paper presents a neural network model called CEREBRA (Context-dEpendent meaning REpresentation in the BRAin) that demonstrates this process based on fMRI sentence patterns and the Concept Attribute Representation (CAR) theory. In several experiments, CEREBRA is used to quantify the conceptual combination effect and to demonstrate that it matters to humans. Such context-based representations could be used in future natural language processing systems, allowing them to mirror human performance more accurately.
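One way to picture “quantifying the conceptual combination effect” is to measure how far a word’s context-adapted representation moves between two sentences. The sketch below is a hypothetical proxy, not the paper’s method: the vectors are invented, and cosine distance is only one plausible choice of measure.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented context-adapted vectors for "small" in two sentence contexts.
small_in_mosquito_ctx = np.array([0.16, 0.27, 0.00])
small_in_planet_ctx   = np.array([0.43, 0.00, 0.03])

# A toy combination-effect score: how dissimilar the two adapted
# representations of the same word are (0 = identical meaning).
effect = 1.0 - cosine(small_in_mosquito_ctx, small_in_planet_ctx)
print(f"conceptual combination effect (cosine distance): {effect:.3f}")
```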