Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models

Siqi Wang, Zhengyu Chen, Bei Li, Keqing He, Min Zhang, Jingang Wang


Abstract
The scaling of large language models (LLMs) is a critical research area for improving the efficiency and effectiveness of model training and deployment. Our work investigates the transferability and discrepancies of scaling laws between dense models and Mixture of Experts (MoE) models. Through a combination of theoretical analysis and extensive experiments, covering loss scaling consistency, the scaling of optimal batch size and learning rate, and compute resource allocation strategies, our findings reveal that the power-law scaling framework also applies to MoE models, indicating that the fundamental principles governing their scaling behavior are preserved even though the architectures differ. Additionally, MoE models demonstrate superior generalization, achieving lower test losses than dense models under the same training compute budget. These findings indicate the scaling consistency and transfer generalization capabilities of MoE models, providing new insights for optimizing MoE model training and deployment strategies.
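
The power-law scaling referenced in the abstract can be illustrated with a minimal fitting sketch. The snippet below assumes a generic form L(C) = a · C^(−b) relating training compute C to validation loss L and fits it in log-log space; the data points and resulting coefficients are hypothetical placeholders, not values reported in the paper, and the paper's exact parameterization for dense and MoE models may differ.

```python
import numpy as np

# Hypothetical (training compute, validation loss) pairs; illustrative
# placeholders, not data or fitted values from the paper.
compute = np.array([1e18, 3e18, 1e19, 3e19, 1e20])  # training FLOPs C
loss = np.array([3.10, 2.85, 2.62, 2.44, 2.28])     # validation loss L

# A power law L(C) = a * C^(-b) is linear in log-log space:
#   log L = log a - b * log C
# so an ordinary least-squares fit on the logs recovers (a, b).
slope, intercept = np.polyfit(np.log(compute), np.log(loss), deg=1)
b = -slope
a = np.exp(intercept)

print(f"fitted power law: L(C) ~= {a:.2f} * C^(-{b:.4f})")
# The same procedure can be applied separately to dense and MoE runs to
# compare their fitted exponents and offsets under a shared compute budget.
```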
Anthology ID:
2024.emnlp-main.319
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5583–5595
URL:
https://aclanthology.org/2024.emnlp-main.319
Cite (ACL):
Siqi Wang, Zhengyu Chen, Bei Li, Keqing He, Min Zhang, and Jingang Wang. 2024. Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5583–5595, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models (Wang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.319.pdf