Erika Petersen


2023

Lexical Semantics with Large Language Models: A Case Study of English “break”
Erika Petersen | Christopher Potts
Findings of the Association for Computational Linguistics: EACL 2023

Large neural language models (LLMs) can be powerful tools for research in lexical semantics. We illustrate this potential using the English verb “break”, which has numerous senses and appears in a wide range of syntactic frames. We show that LLMs capture known sense distinctions and can be used to identify informative new sense combinations for further analysis. More generally, we argue that LLMs are aligned with lexical semantic theories in providing high-dimensional, contextually modulated representations, but LLMs’ lack of discrete features and dependence on usage-based data offer a genuinely new perspective on traditional problems in lexical semantics.
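
As an illustrative sketch only (not the method used in the paper), the abstract's point that LLMs provide contextually modulated representations of a word like "break" can be demonstrated by comparing contextual embeddings across sentences. The model choice, example sentences, and helper function below are assumptions for illustration; they presume the Hugging Face transformers and torch packages are installed.

    # Sketch: compare contextual embeddings of "break" in different sentences.
    # Model name and sentences are illustrative assumptions, not from the paper.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def break_embedding(sentence: str) -> torch.Tensor:
        """Return the contextual embedding of the token 'break' in the sentence."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
        tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
        return hidden[tokens.index("break")]

    # Two physical-change uses vs. one abstract use of the verb "break"
    e1 = break_embedding("The vase will break if it falls.")
    e2 = break_embedding("The glass might break under pressure.")
    e3 = break_embedding("She promised she would never break her word.")

    cos = torch.nn.functional.cosine_similarity
    print("vase/glass  :", cos(e1, e2, dim=0).item())  # expected: more similar
    print("vase/promise:", cos(e1, e3, dim=0).item())  # expected: less similar

In a sketch like this, higher similarity for same-sense pairs than for cross-sense pairs would reflect the kind of sense distinction the abstract says LLM representations capture.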