Emergent Inabilities? Inverse Scaling Over the Course of Pretraining

James Michaelov, Ben Bergen


Abstract
Does inverse scaling only occur as a function of model size, or can it also occur over the course of training? We carry out an exploratory study investigating whether the performance of language models on specific tasks can decrease (while general performance remains high) during training on the language modeling task. We find 8 tasks on which Pythia 12B (Biderman et al., 2023) shows decreased performance over the course of training. Five of these tasks (TruthfulQA-MC1, TruthfulQA-MC2, Hindsight Neglect, Memo Trap, and Pattern Match Suppression) additionally show a consistent relationship whereby larger language models show a greater decrease in performance the more they are trained, despite showing standard (positive) scaling overall. This highlights the importance of testing performance on all relevant benchmarks whenever models are trained on additional data, even if their overall performance improves.
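
As an illustrative sketch (not the authors' evaluation pipeline), the kind of analysis described in the abstract can be probed by loading the intermediate Pythia checkpoints that EleutherAI publishes as step-numbered revisions on the Hugging Face Hub and scoring a benchmark at each checkpoint. The specific revision names, the toy task, and the scoring loop below are assumptions for illustration only.

```python
# Minimal sketch: compare a toy multiple-choice accuracy across Pythia
# training checkpoints. Revision names and the example task are assumed
# for illustration; this is not the paper's evaluation code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-12b"  # smaller variants (e.g. pythia-160m) also have step revisions
REVISIONS = ["step13000", "step71000", "step143000"]  # assumed checkpoint branches

# Tiny stand-in task: pick the more probable continuation of a prompt.
EXAMPLES = [
    {"prompt": "Actions speak louder than", "options": [" words", " dollars"], "label": 0},
]

def option_logprob(model, tokenizer, prompt, option):
    """Sum of token log-probabilities of `option` conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    # Positions whose next-token prediction falls inside the option span.
    option_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(log_probs[i, full_ids[0, i + 1]].item() for i in option_positions)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
for revision in REVISIONS:
    model = AutoModelForCausalLM.from_pretrained(MODEL, revision=revision)
    model.eval()
    correct = 0
    for ex in EXAMPLES:
        scores = [option_logprob(model, tokenizer, ex["prompt"], o) for o in ex["options"]]
        correct += int(max(range(len(scores)), key=scores.__getitem__) == ex["label"])
    print(f"{revision}: accuracy = {correct / len(EXAMPLES):.2f}")
```

Running such a loop over many checkpoints and benchmarks is what makes it possible to see whether task performance rises monotonically with training or, as reported here, degrades on some tasks even as overall language modeling improves.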
Anthology ID: 2023.findings-emnlp.973
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14607–14615
URL: https://aclanthology.org/2023.findings-emnlp.973
DOI: 10.18653/v1/2023.findings-emnlp.973
Cite (ACL): James Michaelov and Ben Bergen. 2023. Emergent Inabilities? Inverse Scaling Over the Course of Pretraining. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14607–14615, Singapore. Association for Computational Linguistics.
Cite (Informal): Emergent Inabilities? Inverse Scaling Over the Course of Pretraining (Michaelov & Bergen, Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.973.pdf