Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes

Kosuke Nishida, Kyosuke Nishida, Kuniko Saito


Abstract
Loss spikes, a phenomenon in which the loss value suddenly diverges, are a fundamental issue in the pre-training of large language models. This paper posits that the non-uniformity of the norms of the model parameters is one of the causes of loss spikes. In training neural networks, the scale of the gradients must be kept constant across layers to avoid the vanishing and exploding gradients problem. However, meeting this requirement in the Transformer model forces the norms of the parameters to be non-uniform, so parameters with smaller norms are more sensitive to parameter updates. To address this issue, we propose a novel technique, weight scaling as reparameterization (WeSaR). WeSaR introduces a gate parameter per parameter matrix and adjusts it to the value that satisfies the requirement. Because the gate parameter absorbs the layer-specific scale, WeSaR can set the norms of the original parameters uniformly, which results in stable training. Experimental results with Transformer decoders consisting of 130 million, 1.3 billion, and 13 billion parameters showed that WeSaR stabilizes and accelerates training and that it outperforms the compared methods, including popular initialization methods.
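The abstract describes reparameterizing each weight matrix as a gate scalar times an underlying parameter with a uniform norm. Below is a minimal PyTorch-style sketch of that idea, not the authors' reference implementation; the names `common_std` and `gate_init`, and the choice of a trainable gate, are illustrative assumptions.

```python
# Sketch of weight scaling as reparameterization (assumed interface, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReparameterizedLinear(nn.Module):
    """Linear layer whose effective weight is gate * W.

    W is initialized with one common standard deviation shared by all layers,
    so its norm is uniform across the model. The per-matrix scalar `gate`
    carries the layer-specific scale that a standard initialization scheme
    would otherwise bake into W itself.
    """

    def __init__(self, in_features: int, out_features: int,
                 common_std: float = 0.02, gate_init: float = 1.0):
        super().__init__()
        # Uniform-norm underlying parameter: same std for every matrix.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * common_std)
        # One gate per parameter matrix; gate_init would be chosen so that
        # gate_init * common_std matches the scale required for stable gradients.
        self.gate = nn.Parameter(torch.tensor(gate_init))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.gate * self.weight, self.bias)


if __name__ == "__main__":
    layer = ReparameterizedLinear(512, 2048, common_std=0.02, gate_init=0.5)
    y = layer(torch.randn(4, 512))
    print(y.shape)  # torch.Size([4, 2048])
```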
Anthology ID:
2024.emnlp-main.1264
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22699–22714
URL:
https://aclanthology.org/2024.emnlp-main.1264
Cite (ACL):
Kosuke Nishida, Kyosuke Nishida, and Kuniko Saito. 2024. Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22699–22714, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes (Nishida et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1264.pdf