On General Language Understanding

David Schlangen


Abstract
Natural Language Processing prides itself on being an empirically minded, if not outright empiricist, field, and yet lately it finds itself drawn into essentialist debates on issues of meaning and measurement (“Do Large Language Models Understand Language, And If So, How Much?”). This is not by accident: here, as everywhere, the evidence underspecifies the understanding. As a remedy, this paper sketches the outlines of a model of understanding, which can ground questions about the adequacy of current methods for measuring model quality. The paper makes three claims: A) that different types of language use situations have different characteristics; B) that language understanding is a multifaceted phenomenon, bringing together individualistic and social processes; and C) that the choice of Understanding Indicator marks the limits of benchmarking, and the beginnings of considerations of the ethics of NLP use.
Anthology ID: 2023.findings-emnlp.591
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8818–8825
URL: https://aclanthology.org/2023.findings-emnlp.591
DOI: 10.18653/v1/2023.findings-emnlp.591
Cite (ACL): David Schlangen. 2023. On General Language Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8818–8825, Singapore. Association for Computational Linguistics.
Cite (Informal): On General Language Understanding (Schlangen, Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.591.pdf