Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures

Julius Steuer, Marius Mosbach, Dietrich Klakow


Anthology ID:
2023.conll-babylm.12
Volume:
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Month:
December
Year:
2023
Address:
Singapore
Editors:
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, Ryan Cotterell
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
142–157
URL:
https://aclanthology.org/2023.conll-babylm.12
DOI:
10.18653/v1/2023.conll-babylm.12
Cite (ACL):
Julius Steuer, Marius Mosbach, and Dietrich Klakow. 2023. Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 142–157, Singapore. Association for Computational Linguistics.
Cite (Informal):
Large GPT-like Models are Bad Babies: A Closer Look at the Relationship between Linguistic Competence and Psycholinguistic Measures (Steuer et al., CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-babylm.12.pdf