Assessing LLMs’ Understanding of Structural Contrasts in the Lexicon

Shuxu Li, Antoine Venant, Philippe Langlais, François Lareau


Abstract
We present a new benchmark for evaluating the lexical competence of large language models (LLMs), built on a hierarchical classification of lexical functions (LFs) within the Meaning-Text Theory (MTT) framework. Drawing on the French Lexical Network (LN-fr) dataset, the benchmark employs contrastive tasks to probe the models’ sensitivity to fine-grained paradigmatic and syntagmatic distinctions. Our results show that performance varies significantly across LFs and declines systematically as the distinctions become finer-grained, highlighting current LLMs’ limitations in relational and structured lexical understanding.
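
To make the setup concrete, below is a minimal, purely illustrative sketch of how a contrastive lexical-function probe could be posed to an LLM and scored. The item schema (ContrastiveItem), prompt wording, ask_llm callable, and the Magn('pluie') example are assumptions for illustration only; they do not reproduce the benchmark's actual format or data.

# Hypothetical sketch of a two-way contrastive LF probe (not the paper's protocol).
import random
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ContrastiveItem:
    lf: str          # lexical function label, e.g. "Magn" (intensifier)
    keyword: str     # headword the function applies to
    target: str      # value that realises the LF for this keyword
    distractor: str  # contrasting value (drawn from another LF or keyword)

def build_prompt(item: ContrastiveItem, rng: random.Random) -> tuple[str, str]:
    """Return a forced-choice prompt and the letter of the correct option."""
    options = [item.target, item.distractor]
    rng.shuffle(options)  # randomise option order to avoid position bias
    correct = "a" if options[0] == item.target else "b"
    prompt = (
        f"Lexical function: {item.lf}\n"
        f"Keyword: {item.keyword}\n"
        "Which candidate realises this function for the keyword?\n"
        f"(a) {options[0]}\n(b) {options[1]}\n"
        "Answer with 'a' or 'b'."
    )
    return prompt, correct

def accuracy(items: Iterable[ContrastiveItem], ask_llm: Callable[[str], str], seed: int = 0) -> float:
    """Fraction of items where the model picks the LF-consistent candidate."""
    rng = random.Random(seed)
    items = list(items)
    hits = 0
    for it in items:
        prompt, gold = build_prompt(it, rng)
        if ask_llm(prompt).strip().lower().startswith(gold):
            hits += 1
    return hits / len(items)

# Illustrative item using a classic MTT example: Magn('pluie') ≈ 'torrentielle'.
demo = ContrastiveItem(lf="Magn", keyword="pluie",
                       target="torrentielle", distractor="légère")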
Anthology ID: 2025.iwcs-main.9
Volume: Proceedings of the 16th International Conference on Computational Semantics
Month: September
Year: 2025
Address: Düsseldorf, Germany
Editors: Kilian Evang, Laura Kallmeyer, Sylvain Pogodalla
Venue: IWCS
SIG: SIGSEM
Publisher: Association for Computational Linguistics
Pages: 98–109
URL: https://aclanthology.org/2025.iwcs-main.9/
Cite (ACL): Shuxu Li, Antoine Venant, Philippe Langlais, and François Lareau. 2025. Assessing LLMs’ Understanding of Structural Contrasts in the Lexicon. In Proceedings of the 16th International Conference on Computational Semantics, pages 98–109, Düsseldorf, Germany. Association for Computational Linguistics.
Cite (Informal): Assessing LLMs’ Understanding of Structural Contrasts in the Lexicon (Li et al., IWCS 2025)
PDF: https://aclanthology.org/2025.iwcs-main.9.pdf