Nikitas Theodoropoulos
2026
BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data
Jaap Jumelet | Abdellah Fourtassi | Akari Haga | Bastian Bunzeck | Bhargav Shandilya | Diana Galván-Sosa | Faiz Ghifari Haznitrama | Francesca Padovani | Francois Meyer | Hai Hu | Julen Etxaniz | Laurent Prévot | Linyang He | María Grandury | Mila Marcheva | Negar Foroutan | Nikitas Theodoropoulos | Pouya Sadeghi | Siyuan Song | Suchir Salhan | Susana Zhou | Yurii Paniv | Ziyin Zhang | Arianna Bisazza | Alex Warstadt | Leshem Choshen
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
2024
BERTtime Stories: Investigating the Role of Synthetic Story Data in Language Pre-training
Nikitas Theodoropoulos | Giorgos Filandrianos | Vassilis Lyberatos | Maria Lymperaiou | Giorgos Stamou
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
We describe our contribution to the Strict and Strict-Small tracks of the 2nd iteration of the BabyLM Challenge. The shared task is centered around efficient pre-training given data constraints motivated by human development. In response, we study the effect of synthetic story data in language pre-training using *TinyStories*: a recently introduced dataset of short stories. Initially, we train GPT-Neo models on subsets of *TinyStories* while varying the amount of available data. We find that, even with access to fewer than 100M words, the models are able to generate high-quality, original completions to a given story and acquire substantial linguistic knowledge. To measure the effect of synthetic story data, we train *LTG-BERT* encoder models on a combined dataset of a subset of *TinyStories*, story completions generated by GPT-Neo, and a subset of the *BabyLM* dataset. Our experiments reveal that synthetic data can occasionally offer modest gains but overall has a negative influence on linguistic understanding. Our work offers an initial study of synthesizing story data in low-resource settings and underscores its potential for data augmentation in data-constrained language modeling. We publicly release our models and implementation on GitHub.
Co-authors
- Arianna Bisazza 1
- Bastian Bunzeck 1
- Leshem Choshen 1
- Julen Etxaniz 1
- Giorgos Filandrianos 1
- Negar Foroutan 1
- Abdellah Fourtassi 1
- Diana Galván-Sosa 1
- María Grandury 1
- Akari Haga 1
- Faiz Ghifari Haznitrama 1
- Linyang He 1
- Hai Hu 1
- Jaap Jumelet 1
- Vassilis Lyberatos 1
- Maria Lymperaiou 1
- Mila Marcheva 1
- Francois Meyer 1
- Francesca Padovani 1
- Yurii Paniv 1
- Laurent Prévot 1
- Pouya Sadeghi 1
- Suchir Salhan 1
- Bhargav Shandilya 1
- Siyuan Song 1
- Giorgos Stamou 1
- Alex Warstadt 1
- Ziyin Zhang 1
- Susana Zhou 1