Nathan Paek


2024

What should Baby Models read? Exploring Sample-Efficient Data Composition on Model Performance
Hong Meng Yam | Nathan Paek
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning

We explore the impact of pre-training data composition on the performance of small language models in a sample-efficient setting. Using datasets capped at 10 million words, we evaluate several data sources—including child-directed speech (CHILDES), classic fiction (Gutenberg), a mixed dataset (Mix), and synthetic TinyStories—across model sizes ranging from 18 million to 705 million parameters. Our experiments show that smaller models (e.g., GPT2-18M and GPT2-44M) benefit from training on diverse datasets like Mix, achieving better performance on linguistic benchmarks. In contrast, larger models (e.g., GPT2-97M, GPT2-705M, and LLaMA-360M) perform better when trained on more complex and rich datasets like Gutenberg. Models trained on the CHILDES and TinyStories datasets underperform across all model sizes. These findings suggest that the optimal dataset for sample-efficient training depends on model size, and that neither child-directed speech nor simplified stories is optimal for small language models of all sizes. We highlight the importance of considering both dataset composition and model capacity for effective sample-efficient language model training.
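
As a concrete illustration of the setup described in the abstract, the sketch below shows how scaled-down GPT-2 variants in the tens-of-millions-of-parameters range might be instantiated with the Hugging Face transformers library before pre-training on a 10M-word corpus. The layer counts, widths, vocabulary size, and context length are assumptions chosen for illustration only; they are not taken from the paper.

    from transformers import GPT2Config, GPT2LMHeadModel

    def make_small_gpt2(n_embd: int, n_layer: int, n_head: int) -> GPT2LMHeadModel:
        # Build a scaled-down GPT-2 language model with tied input/output embeddings.
        config = GPT2Config(
            vocab_size=16_000,   # assumed: a small BPE vocabulary fit to the 10M-word corpus
            n_positions=512,     # assumed context length
            n_embd=n_embd,
            n_layer=n_layer,
            n_head=n_head,
        )
        return GPT2LMHeadModel(config)

    # Illustrative configurations roughly in the size range discussed in the abstract (assumed).
    gpt2_18m = make_small_gpt2(n_embd=384, n_layer=6, n_head=6)
    gpt2_44m = make_small_gpt2(n_embd=512, n_layer=12, n_head=8)

    print(f"{sum(p.numel() for p in gpt2_18m.parameters()) / 1e6:.1f}M parameters")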

Teaching Tiny Minds: Exploring Methods to Enhance Knowledge Distillation for Small Language Models
Hong Meng Yam | Nathan Paek
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning

In this paper, we build on the success of the previous BabyLM challenge winner’s model, BabyLlama, to explore various methods of enhancing knowledge distillation for small language models. Our main focus is on investigating how small a language model can be while still maintaining competitive performance. We experiment with three main approaches: (1) DistilledGPT-44M, which uses smaller teacher models and a more compact student model compared to BabyLlama; (2) ContrastiveLlama-58M, which incorporates contrastive loss into the knowledge distillation process; and (3) MaskedAdversarialLlama-58M, which incorporates adversarial loss into the knowledge distillation process. Using the 10M-word dataset from the BabyLM challenge’s strict-small track, we evaluate our models on the BLiMP, EWoK, and GLUE benchmarks. Our results show that effective knowledge distillation can still be achieved with significantly smaller teacher and student models. In particular, our model DistilledGPT-44M outperforms one of last year’s winning entries, LTG-BERT, and matches the performance of the other winning entry, BabyLlama, while cutting training time by around 70% and parameter count by around 25%.
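
For context on the training objective being varied here, the following is a minimal sketch of a standard knowledge-distillation loss of the kind BabyLlama-style setups build on: a weighted mix of hard-label cross-entropy and a temperature-scaled KL term toward the teacher's logits. The function name, weighting, and temperature are illustrative assumptions, not the paper's exact values; the contrastive and adversarial variants described above add further loss terms whose details are not given in the abstract.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          labels: torch.Tensor,
                          alpha: float = 0.5,
                          temperature: float = 2.0) -> torch.Tensor:
        # Hard-label term: ordinary next-token cross-entropy against the ground truth.
        ce = F.cross_entropy(
            student_logits.view(-1, student_logits.size(-1)),
            labels.view(-1),
            ignore_index=-100,
        )
        # Soft-label term: KL divergence from the temperature-scaled teacher distribution.
        kl = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)
        # Weighted combination of the two objectives.
        return alpha * ce + (1.0 - alpha) * kl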