Beyond Repetition: Text Simplification and Curriculum Learning for Data-Constrained Pretraining

Matthew Theodore Roque, Dan John Velasco


Abstract
Most language model pretraining studies assume large data volumes, leaving open how to improve pretraining in data-constrained settings beyond repeated exposure. In such settings, the effects of training data order and of including alternative versions of the same text remain underexplored. We address this by studying curriculum learning in pretraining, focusing on text-complexity ordering and data augmentation via simplification. We ask: (1) Does simplifying texts enhance representation quality more than reusing the original data? (2) Does ordering data by text complexity yield better representations? To answer, we simplify a high-quality English dataset using a large language model and test four data schedules: (1) repeated exposure, (2) low-to-high complexity, (3) high-to-low complexity, and (4) interleaved. We analyze models’ representation quality from a sample-efficiency perspective via fine-tuning, as well as their zero-shot performance on linguistic knowledge, entity tracking, world knowledge, and commonsense reasoning. Our findings show that adding simplified data improves fine-tuning and zero-shot performance over the repeated-exposure baseline: smaller models benefit from low-to-high complexity ordering, while larger models perform better with interleaved ordering.
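As a rough illustration of the four data schedules named in the abstract (not the authors' implementation), the sketch below assumes each document carries a precomputed complexity score and that a simplified counterpart of the corpus is available; all function and variable names are hypothetical.

```python
import random

# Illustrative sketch of the four data schedules described in the abstract.
# `original` and `simplified` are lists of (text, complexity_score) pairs;
# the score might come from, e.g., a readability metric. This is an assumed
# construction, not the paper's code.

def repeated_exposure(original, n_epochs=2):
    # Baseline: reuse the original data for every pass.
    return [doc for _ in range(n_epochs) for doc, _ in original]

def low_to_high(original, simplified):
    # Curriculum: combined data ordered from simplest to most complex.
    pool = simplified + original
    return [doc for doc, _ in sorted(pool, key=lambda x: x[1])]

def high_to_low(original, simplified):
    # Reverse curriculum: most complex documents first.
    pool = simplified + original
    return [doc for doc, _ in sorted(pool, key=lambda x: x[1], reverse=True)]

def interleaved(original, simplified, seed=0):
    # No complexity ordering: shuffle simplified and original together.
    pool = simplified + original
    rng = random.Random(seed)
    rng.shuffle(pool)
    return [doc for doc, _ in pool]
```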
Anthology ID:
2025.babylm-main.19
Volume:
Proceedings of the First BabyLM Workshop
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lucas Charpentier, Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Michael Y. Hu, Jing Liu, Jaap Jumelet, Tal Linzen, Aaron Mueller, Candace Ross, Raj Sanjay Shah, Alex Warstadt, Ethan Gotlieb Wilcox, Adina Williams
Venue:
BabyLM
Publisher:
Association for Computational Linguistics
Pages:
246–255
URL:
https://aclanthology.org/2025.babylm-main.19/
Cite (ACL):
Matthew Theodore Roque and Dan John Velasco. 2025. Beyond Repetition: Text Simplification and Curriculum Learning for Data-Constrained Pretraining. In Proceedings of the First BabyLM Workshop, pages 246–255, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Beyond Repetition: Text Simplification and Curriculum Learning for Data-Constrained Pretraining (Roque & Velasco, BabyLM 2025)
PDF:
https://aclanthology.org/2025.babylm-main.19.pdf