Language Model Behavior: A Comprehensive Survey

Tyler A. Chang, Benjamin K. Bergen


Abstract
Transformer language models have received widespread public attention, yet their generated text is often surprising even to NLP researchers. In this survey, we discuss over 250 recent studies of English language model behavior before task-specific fine-tuning. Language models possess basic capabilities in syntax, semantics, pragmatics, world knowledge, and reasoning, but these capabilities are sensitive to specific inputs and surface features. Despite dramatic increases in generated text quality as models scale to hundreds of billions of parameters, the models are still prone to nonfactual responses, commonsense errors, memorized text, and social biases. Many of these weaknesses can be framed as over-generalizations or under-generalizations of learned patterns in text. We synthesize recent results to highlight what is currently known about large language model capabilities, thus providing a resource for applied work and for research in adjacent fields that use language models.
Anthology ID: 2024.cl-1.9
Volume: Computational Linguistics, Volume 50, Issue 1 - March 2024
Month: March
Year: 2024
Address: Cambridge, MA
Venue: CL
Publisher: MIT Press
Pages: 293–350
URL: https://aclanthology.org/2024.cl-1.9
DOI: 10.1162/coli_a_00492
Cite (ACL): Tyler A. Chang and Benjamin K. Bergen. 2024. Language Model Behavior: A Comprehensive Survey. Computational Linguistics, 50(1):293–350.
Cite (Informal): Language Model Behavior: A Comprehensive Survey (Chang & Bergen, CL 2024)
PDF: https://aclanthology.org/2024.cl-1.9.pdf