A Proposal for Scaling the Scaling Laws

Wout Schellaert, Ronan Hamon, Fernando Martínez-Plumed, Jose Hernandez-Orallo


Abstract
Scaling laws are predictable relations between the performance of AI systems and various scalable design choices, such as model or dataset size. To keep predictions interpretable, scaling analysis has traditionally relied on heavy summarisation of both the system design and its performance. We argue that this summarisation and aggregation are a major source of predictive inaccuracy and lack of generalisation. With a synthetic example, we show how scaling analysis needs to be instance-based to accurately model realistic benchmark behaviour, highlighting the need for richer evaluation datasets and more complex inferential tools, for which we outline an actionable proposal.
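As an illustration of the kind of aggregate scaling analysis the abstract describes (a sketch only, not taken from the paper): performance is collapsed into a single benchmark-level score per model, and a parametric curve is fitted against a scale variable such as model size. The Python snippet below uses hypothetical data and a saturating power law of the form error = a * N^(-b) + c; all values and parameter names are invented for illustration. The instance-based analysis the authors argue for would instead model each benchmark instance separately rather than this single aggregate.

import numpy as np
from scipy.optimize import curve_fit

def power_law(n_rel, a, b, c):
    """Saturating power law: error = a * n_rel**(-b) + c (n_rel is model size / 1e8)."""
    return a * n_rel ** (-b) + c

# Hypothetical aggregate benchmark error at several model sizes (parameters),
# normalised by 1e8 to keep the fit well conditioned.
model_sizes = np.array([1e8, 3e8, 1e9, 3e9, 1e10, 3e10])
agg_error = np.array([0.60, 0.52, 0.45, 0.40, 0.36, 0.33])

# Fit the curve to the aggregate scores; this is the "heavy summarisation"
# step the abstract criticises, since per-instance behaviour is collapsed
# into one number per model.
params, _ = curve_fit(power_law, model_sizes / 1e8, agg_error, p0=(0.5, 0.3, 0.1))
a, b, c = params
print(f"fitted: error ~ {a:.2f} * (N/1e8)^(-{b:.2f}) + {c:.2f}")

# Extrapolate the aggregate curve to a larger, unseen scale.
print("predicted error at N=1e11:", power_law(1e11 / 1e8, *params))

An instance-based alternative, in this hypothetical setup, would replace the single agg_error vector with one outcome per benchmark instance and per model, and fit a predictive model over instance features as well as scale.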
Anthology ID: 2024.scalellm-1.1
Volume: Proceedings of the First edition of the Workshop on the Scaling Behavior of Large Language Models (SCALE-LLM 2024)
Month: March
Year: 2024
Address: St. Julian’s, Malta
Editors: Antonio Valerio Miceli-Barone, Fazl Barez, Shay Cohen, Elena Voita, Ulrich Germann, Michal Lukasik
Venues: SCALE-LLM | WS
Publisher: Association for Computational Linguistics
Pages: 1–8
URL: https://aclanthology.org/2024.scalellm-1.1
Cite (ACL): Wout Schellaert, Ronan Hamon, Fernando Martínez-Plumed, and Jose Hernandez-Orallo. 2024. A Proposal for Scaling the Scaling Laws. In Proceedings of the First edition of the Workshop on the Scaling Behavior of Large Language Models (SCALE-LLM 2024), pages 1–8, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal): A Proposal for Scaling the Scaling Laws (Schellaert et al., SCALE-LLM-WS 2024)
PDF: https://aclanthology.org/2024.scalellm-1.1.pdf