Mila Marcheva
2026
BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data
Jaap Jumelet | Abdellah Fourtassi | Akari Haga | Bastian Bunzeck | Bhargav Shandilya | Diana Galvan-Sosa | Faiz Ghifari Haznitrama | Francesca Padovani | Francois Meyer | Hai Hu | Julen Etxaniz | Laurent Prevot | Linyang He | María Grandury | Mila Marcheva | Negar Foroutan | Nikitas Theodoropoulos | Pouya Sadeghi | Siyuan Song | Suchir Salhan | Susana Zhou | Yurii Paniv | Ziyin Zhang | Arianna Bisazza | Alex Warstadt | Leshem Choshen
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
2025
Profiling neural grammar induction on morphemically tokenised child-directed speech
Mila Marcheva | Theresa Biberauer | Weiwei Sun
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
We investigate the performance of state-of-the-art (SotA) neural grammar induction (GI) models on a morphemically tokenised English dataset based on the CHILDES treebank (Pearl and Sprouse, 2013). Using implementations from Yang et al. (2021a), we train models and evaluate them with the standard F1 score. We introduce novel evaluation metrics—depth-of-morpheme and sibling-of-morpheme—which measure phenomena around bound morpheme attachment. Our results reveal that models with the highest F1 scores do not necessarily induce linguistically plausible structures for bound morpheme attachment, highlighting a key challenge for cognitively plausible GI.
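The abstract's evaluation metric is the standard unlabeled bracket F1 over constituent spans. As a minimal illustrative sketch (not the paper's code; the helper name and span representation are assumptions), F1 can be computed from gold and predicted span sets like so:

```python
# Minimal sketch of unlabeled bracket F1, the standard grammar-induction
# metric mentioned in the abstract. Spans are hypothetical (start, end)
# token-index pairs; the function name is illustrative, not from the paper.

def bracket_f1(gold_spans: set[tuple[int, int]],
               pred_spans: set[tuple[int, int]]) -> float:
    """Harmonic mean of span precision and recall."""
    if not gold_spans or not pred_spans:
        return 0.0
    matched = len(gold_spans & pred_spans)   # spans present in both trees
    precision = matched / len(pred_spans)
    recall = matched / len(gold_spans)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example with made-up spans over a five-token sentence:
gold = {(0, 5), (0, 2), (2, 5), (3, 5)}
pred = {(0, 5), (0, 2), (2, 4), (3, 5)}
print(f"F1 = {bracket_f1(gold, pred):.2f}")  # F1 = 0.75
```

The paper's novel depth-of-morpheme and sibling-of-morpheme metrics measure where bound morphemes attach in the induced trees; their exact definitions are given in the paper itself and are not reproduced here.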
Co-authors
- Theresa Biberauer 1
- Arianna Bisazza 1
- Bastian Bunzeck 1
- Leshem Choshen 1
- Julen Etxaniz 1
- Negar Foroutan 1
- Abdellah Fourtassi 1
- Diana Galván-Sosa 1
- María Grandury 1
- Akari Haga 1
- Faiz Ghifari Haznitrama 1
- Linyang He 1
- Hai Hu 1
- Jaap Jumelet 1
- Francois Meyer 1
- Francesca Padovani 1
- Yurii Paniv 1
- Laurent Prévot 1
- Pouya Sadeghi 1
- Suchir Salhan 1
- Bhargav Shandilya 1
- Siyuan Song 1
- Weiwei Sun 1
- Nikitas Theodoropoulos 1
- Alex Warstadt 1
- Ziyin Zhang 1
- Susana Zhou 1