Generalizations across filler-gap dependencies in neural language models

Katherine Howitt, Sathvik Nair, Allison Dods, Robert Melvin Hopkins


Abstract
Humans develop their grammars by making structural generalizations from finite input. We ask how filler-gap dependencies (FGDs), which share a structural generalization despite diverse surface forms, might arise from the input. We explicitly control the input to a neural language model (NLM) to uncover whether the model posits a shared representation for FGDs. We show that while NLMs succeed at differentiating grammatical from ungrammatical FGDs, they rely on superficial properties of the input rather than on a shared generalization. Our work highlights the need for specific linguistic inductive biases to model language acquisition.
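
For illustration only, the sketch below shows one common way a grammaticality contrast of this kind can be measured with an off-the-shelf NLM: summed surprisal is compared across a filler-gap minimal pair, where the ungrammatical variant fills the gap that the wh-filler should license. The model (GPT-2 via the Hugging Face transformers library) and the example sentences are assumptions made for this sketch, not the authors' controlled training setup or materials.

# Hypothetical sketch (not the authors' code): compare LM surprisal on a
# grammatical vs. ungrammatical filler-gap minimal pair.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_surprisal(sentence: str) -> float:
    """Sum of per-token surprisals (negative log-probabilities, in nats)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Predict token t+1 from the prefix ending at token t.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    token_surprisals = -log_probs[torch.arange(targets.size(0)), targets]
    return token_surprisals.sum().item()

# Minimal pair: the wh-filler "what" licenses a gap after "bought".
grammatical = "I know what the customer bought yesterday."
ungrammatical = "I know what the customer bought the gift yesterday."  # filled gap

print(total_surprisal(grammatical), total_surprisal(ungrammatical))
# A model that has learned the dependency should assign higher surprisal
# to the filled-gap (ungrammatical) variant.
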
Anthology ID: 2024.conll-1.21
Volume: Proceedings of the 28th Conference on Computational Natural Language Learning
Month: November
Year: 2024
Address: Miami, FL, USA
Editors: Libby Barak, Malihe Alikhani
Venue: CoNLL
Publisher: Association for Computational Linguistics
Pages: 269–279
URL: https://aclanthology.org/2024.conll-1.21
Cite (ACL): Katherine Howitt, Sathvik Nair, Allison Dods, and Robert Melvin Hopkins. 2024. Generalizations across filler-gap dependencies in neural language models. In Proceedings of the 28th Conference on Computational Natural Language Learning, pages 269–279, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal): Generalizations across filler-gap dependencies in neural language models (Howitt et al., CoNLL 2024)
PDF: https://aclanthology.org/2024.conll-1.21.pdf