2024
Generalizations across filler-gap dependencies in neural language models
Katherine Howitt | Sathvik Nair | Allison Dods | Robert Melvin Hopkins
Proceedings of the 28th Conference on Computational Natural Language Learning
Humans develop their grammars by making structural generalizations from finite input. We ask how filler-gap dependencies (FGDs), which share a structural generalization despite diverse surface forms, might arise from the input. We explicitly control the input to a neural language model (NLM) to uncover whether the model posits a shared representation for FGDs. We show that while NLMs succeed at differentiating grammatical from ungrammatical FGDs, they rely on superficial properties of the input rather than on a shared generalization. Our work highlights the need for specific linguistic inductive biases to model language acquisition.