Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior?

Aryaman Chobey, Oliver Smith, Anzi Wang, Grusha Prasad


Anthology ID:
2023.conll-babylm.9
Volume:
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Month:
December
Year:
2023
Address:
Singapore
Editors:
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, Ryan Cotterell
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
98–111
URL:
https://aclanthology.org/2023.conll-babylm.9
DOI:
10.18653/v1/2023.conll-babylm.9
Cite (ACL):
Aryaman Chobey, Oliver Smith, Anzi Wang, and Grusha Prasad. 2023. Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior?. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 98–111, Singapore. Association for Computational Linguistics.
Cite (Informal):
Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior? (Chobey et al., CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-babylm.9.pdf