Exploring Quantization for Efficient Pre-Training of Transformer Language Models

Kamran Chitsaz, Quentin Fournier, Goncalo Mordido, Sarath Chandar

Abstract
The increasing scale of Transformer models has driven a corresponding increase in their pre-training computational requirements. While quantization has proven effective after pre-training and during fine-tuning, applying it during Transformer pre-training remains largely unexplored at scale for language modeling. This study explores the impact of quantization on efficient pre-training of Transformers, with a focus on linear layer components. By systematically applying straightforward linear quantization to weights, activations, gradients, and optimizer states, we assess its effects on model efficiency, stability, and performance during training. We offer a comprehensive recipe of effective quantization strategies to apply during Transformer pre-training, promoting high training efficiency from scratch while retaining language modeling ability.
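To make the setup concrete, the sketch below shows what "straightforward linear quantization" of a linear layer's weights and activations can look like: a symmetric, per-tensor quantizer applied as fake quantization (quantize then immediately dequantize). This is a minimal sketch assuming PyTorch and an 8-bit signed integer grid; the function name and details are illustrative and not taken from the paper.

```python
import torch

def linear_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Fake-quantize `x` with symmetric, per-tensor linear quantization.

    Values are scaled onto a signed integer grid of width `num_bits`,
    rounded, clipped, and immediately dequantized so downstream ops
    remain in floating point. (During training, a straight-through
    estimator is typically used to pass gradients through the rounding.)
    """
    qmax = 2 ** (num_bits - 1) - 1                          # e.g. 127 for 8 bits
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax   # per-tensor scale
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)    # quantize to integer grid
    return q * scale                                        # dequantize

# Example: quantize the weights and input activations of one linear layer.
layer = torch.nn.Linear(512, 512)
x = torch.randn(4, 512)
w_q = linear_quantize(layer.weight, num_bits=8)
x_q = linear_quantize(x, num_bits=8)
y = torch.nn.functional.linear(x_q, w_q, layer.bias)
```

The same quantizer could in principle be applied to gradients and optimizer states; the paper studies which of these components tolerate quantization during pre-training.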
Anthology ID:
2024.findings-emnlp.787
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13473–13487
URL:
https://aclanthology.org/2024.findings-emnlp.787/
DOI:
10.18653/v1/2024.findings-emnlp.787
Cite (ACL):
Kamran Chitsaz, Quentin Fournier, Goncalo Mordido, and Sarath Chandar. 2024. Exploring Quantization for Efficient Pre-Training of Transformer Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13473–13487, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Exploring Quantization for Efficient Pre-Training of Transformer Language Models (Chitsaz et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.787.pdf