What should Baby Models read? Exploring Sample-Efficient Data Composition on Model Performance

Hong Meng Yam, Nathan Paek


Abstract
We explore the impact of pre-training data composition on the performance of small language models in a sample-efficient setting. Using datasets capped at 10 million words, we evaluate several data sources—including child-directed speech (CHILDES), classic fiction (Gutenberg), a mixed dataset (Mix), and synthetic TinyStories—across different model sizes ranging from 18 million to 705 million parameters. Our experiments show that smaller models (e.g., GPT2-18M and GPT2-44M) benefit from training on diverse datasets like Mix, achieving better performance on linguistic benchmarks. In contrast, larger models (e.g., GPT2-97M, GPT2-705M, and LLaMA-360M) perform better when trained on more complex and rich datasets like Gutenberg. Models trained on the CHILDES and TinyStories datasets underperformed across all model sizes. These findings suggest that the optimal dataset for sample-efficient training depends on the model size, and that neither child-directed speech nor simplified stories are optimal for small language models of all sizes. We highlight the importance of considering both dataset composition and model capacity for effective sample-efficient language model training.
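The setup summarized above (corpora capped at 10 million words, GPT-2 variants spanning roughly 18M to 705M parameters) can be illustrated with a minimal sketch. The snippet below, assuming the HuggingFace transformers library, shows one hypothetical way to cap a training corpus by word count and instantiate GPT-2 configurations of different sizes; the helper names, configuration values, and word-budget handling are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code) of capping a corpus at a word
    # budget and building GPT-2 models of different sizes for a
    # BabyLM-style, sample-efficient training experiment.
    from transformers import GPT2Config, GPT2LMHeadModel

    def cap_corpus(lines, word_budget=10_000_000):
        """Keep whole lines until the word budget is exhausted (assumed policy)."""
        kept, used = [], 0
        for line in lines:
            n = len(line.split())
            if used + n > word_budget:
                break
            kept.append(line)
            used += n
        return kept

    # Hypothetical configurations roughly in the parameter range the paper reports;
    # the exact architectures used by the authors are not reproduced here.
    SIZES = {
        "gpt2-18m": dict(n_layer=2, n_head=4, n_embd=256),
        "gpt2-44m": dict(n_layer=4, n_head=8, n_embd=512),
        "gpt2-97m": dict(n_layer=12, n_head=12, n_embd=768),
    }

    def build_model(name):
        cfg = GPT2Config(**SIZES[name])
        return GPT2LMHeadModel(cfg)

    if __name__ == "__main__":
        model = build_model("gpt2-44m")
        print(sum(p.numel() for p in model.parameters()))  # rough parameter count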
Anthology ID:
2024.conll-babylm.25
Volume:
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
Month:
November
Year:
2024
Address:
Miami, FL, USA
Editors:
Michael Y. Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Leshem Choshen, Ryan Cotterell, Alex Warstadt, Ethan Gotlieb Wilcox
Venues:
CoNLL | BabyLM | WS
Publisher:
Association for Computational Linguistics
Pages:
284–291
URL:
https://aclanthology.org/2024.conll-babylm.25/
Cite (ACL):
Hong Meng Yam and Nathan Paek. 2024. What should Baby Models read? Exploring Sample-Efficient Data Composition on Model Performance. In The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning, pages 284–291, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal):
What should Baby Models read? Exploring Sample-Efficient Data Composition on Model Performance (Yam & Paek, CoNLL-BabyLM 2024)
PDF:
https://aclanthology.org/2024.conll-babylm.25.pdf