Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability

Tyler A. Chang, Zhuowen Tu, Benjamin K. Bergen


Abstract
How do language models learn to make predictions during pre-training? To study this, we extract learning curves from five autoregressive English language model pre-training runs, for 1M unseen tokens in context. We observe that the language models generate short repetitive phrases before learning to generate longer and more coherent text. We also find that individual tokens often exhibit sudden increases or decreases in loss that are surprisingly consistent across pre-training runs. To better understand these fluctuations, we quantify the final surprisal, within-run variability, age of acquisition, forgettability, and cross-run variability of learning curves for individual tokens in context. More frequent tokens reach lower final surprisals, exhibit less variability within and across pre-training runs, are learned earlier, and are less likely to be “forgotten” during pre-training. Higher n-gram probabilities further accentuate these effects. Independent of the target token, shorter and more frequent contexts correlate with marginally more stable and quickly acquired predictions. Based on our results, we argue for the existence of sequential learning dependencies between different model capabilities, and we characterize language model learning as early n-gram learning before gradual refinement of tail n-gram predictions.
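The abstract mentions several per-token learning-curve statistics (final surprisal, within-run variability, age of acquisition, forgettability). As a rough illustration of how such quantities could be computed from per-checkpoint log-probabilities, here is a minimal Python sketch; the specific operationalizations below (the 1-bit acquisition margin, the half-curve variability window) are hypothetical simplifications for illustration, not the paper's actual definitions.

```python
import numpy as np

def surprisal_curve(token_logprobs):
    """Convert per-checkpoint log-probabilities (natural log) for one token
    in context into a surprisal learning curve in bits."""
    return -np.asarray(token_logprobs, dtype=float) / np.log(2)

def summarize_curve(surprisals, steps, margin=1.0):
    """Illustrative summary statistics for a single learning curve.

    These are simplified stand-ins for the paper's metrics:
    final surprisal, within-run variability, age of acquisition, and forgetting.
    """
    surprisals = np.asarray(surprisals, dtype=float)
    final = surprisals[-1]
    # Within-run variability: spread of surprisal over the second half of training.
    variability = float(np.std(surprisals[len(surprisals) // 2:]))
    # Acquisition: checkpoints whose surprisal is within `margin` bits of the final value.
    within = surprisals <= final + margin
    # Age of acquisition: first step after which the curve stays within the margin.
    acquired_idx = next(i for i in range(len(within)) if within[i:].all())
    # Forgetting: largest surprisal increase after the curve first reaches its final level.
    first_reach = int(np.argmax(within))
    forgetting = float(max(surprisals[first_reach:].max() - final, 0.0))
    return {
        "final_surprisal": float(final),
        "within_run_variability": variability,
        "age_of_acquisition_step": int(steps[acquired_idx]),
        "forgetting": forgetting,
    }

# Example: a single token's curve logged at five checkpoints (log-probs in nats).
steps = [1_000, 10_000, 100_000, 500_000, 1_000_000]
logprobs = [-8.5, -4.2, -2.1, -3.0, -1.9]
print(summarize_curve(surprisal_curve(logprobs), steps))
```

In this toy example the token temporarily worsens after first reaching its final surprisal level, so the forgetting value is positive; aggregating such statistics over many tokens and runs is the kind of analysis the abstract describes.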
Anthology ID: 2024.tacl-1.74
Volume: Transactions of the Association for Computational Linguistics, Volume 12
Year: 2024
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 1346–1362
URL: https://aclanthology.org/2024.tacl-1.74
DOI: 10.1162/tacl_a_00708
Cite (ACL): Tyler A. Chang, Zhuowen Tu, and Benjamin K. Bergen. 2024. Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability. Transactions of the Association for Computational Linguistics, 12:1346–1362.
Cite (Informal): Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability (Chang et al., TACL 2024)
PDF: https://aclanthology.org/2024.tacl-1.74.pdf