Injecting structural hints: Using language models to study inductive biases in language learning

Isabel Papadimitriou, Dan Jurafsky

Abstract
Both humans and transformer language models are able to learn language without explicit structural supervision. What cognitive inductive biases make this learning possible? Here, we examine the effect of different inductive learning biases by actively controlling the inductive biases of artificial learners: we structurally bias models by pretraining on synthetic formally-structured data, and evaluate these structural biases by fine-tuning on three typologically-distant human languages: English, Japanese, and Basque. We investigate the effect on downstream language perplexity of three types of inductive bias: 1) recursive, hierarchical processing, 2) unrestricted token-token dependencies that cannot be modeled by context-free grammars, and 3) a Zipfian power-law vocabulary distribution. We show that complex, non-context-free interactions between tokens form the best inductive biases. Our study leverages the capabilities of transformer models to run controlled language learning experiments that are not possible to run on humans, and surfaces hypotheses about the structures that facilitate language learning in both humans and machines.
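The three bias types named in the abstract lend themselves to concrete illustration. Below is a minimal Python sketch (a hypothetical illustration, not the authors' released code; VOCAB_SIZE, the bracket token format, and the 0.5 branching probability are arbitrary choices) of one way to generate toy synthetic pretraining corpora with each structural property: Zipfian token frequencies, nested (context-free) dependencies, and crossing (non-context-free) dependencies.

import random

VOCAB_SIZE = 500  # arbitrary toy vocabulary size

def zipfian_sequence(length, s=1.0):
    """Bias (3): sample token ids whose frequencies follow a power law."""
    ranks = list(range(1, VOCAB_SIZE + 1))
    weights = [r ** -s for r in ranks]  # P(rank r) proportional to r^-s
    return random.choices(ranks, weights=weights, k=length)

def nested_pairs(n_pairs):
    """Bias (1): stack-like, center-embedded open/close pairs (Dyck-style),
    so every dependency nests hierarchically."""
    seq, stack, opens_left = [], [], n_pairs
    while opens_left or stack:
        if opens_left and (not stack or random.random() < 0.5):
            tok = random.randrange(VOCAB_SIZE)
            seq.append(f"<{tok}")
            stack.append(tok)
            opens_left -= 1
        else:
            seq.append(f"{stack.pop()}>")  # close most recently opened token
    return seq

def crossing_pairs(n_pairs):
    """Bias (2): queue-like (first-opened, first-closed) pairings; crossing
    dependencies of this kind cannot be generated by a context-free grammar."""
    opened = [random.randrange(VOCAB_SIZE) for _ in range(n_pairs)]
    return [f"<{t}" for t in opened] + [f"{t}>" for t in opened]

if __name__ == "__main__":
    print(zipfian_sequence(10))
    print(" ".join(nested_pairs(4)))
    print(" ".join(crossing_pairs(4)))

Under these assumptions, nested_pairs produces sequences like "<5 <12 12> 5>" (dependencies nest), while crossing_pairs produces "<5 <12 5> 12>"-style sequences in which the first-opened dependency crosses the second, the kind of interaction a context-free grammar cannot capture.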
Anthology ID: 2023.findings-emnlp.563
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8402–8413
URL: https://aclanthology.org/2023.findings-emnlp.563
DOI: 10.18653/v1/2023.findings-emnlp.563
Cite (ACL): Isabel Papadimitriou and Dan Jurafsky. 2023. Injecting structural hints: Using language models to study inductive biases in language learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8402–8413, Singapore. Association for Computational Linguistics.
Cite (Informal): Injecting structural hints: Using language models to study inductive biases in language learning (Papadimitriou & Jurafsky, Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.563.pdf