Sparsity-Accelerated Training for Large Language Models

Da Ma, Lu Chen, Pengyu Wang, Hongshen Xu, Hanqi Li, Liangtai Sun, Su Zhu, Shuai Fan, Kai Yu


Abstract
Large language models (LLMs) have demonstrated proficiency across various natural language processing (NLP) tasks but often require additional training, such as continual pre-training and supervised fine-tuning. However, the costs of such training, driven primarily by the models' large parameter counts, remain high. This paper proposes leveraging sparsity in pre-trained LLMs to expedite this training process. Observing the sparsity of activated neurons during forward iterations, we identify the potential for computational speed-ups by excluding inactive neurons. We address the associated challenges by extending existing neuron importance evaluation metrics and introducing a ladder omission rate scheduler. Our experiments on Llama-2 demonstrate that Sparsity-Accelerated Training (SAT) achieves comparable or superior performance to standard training while significantly accelerating the process. Specifically, SAT achieves a 45% throughput improvement in continual pre-training and saves 38% of training time in supervised fine-tuning. It offers a simple, hardware-agnostic, and easily deployable framework for additional LLM training.
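
To make the abstract's idea concrete, below is a minimal, hypothetical sketch of what sparsity-accelerated training could look like in PyTorch: score hidden neurons by an importance metric computed from forward activations, omit the least-important ones, and raise the omission rate in ladder-like steps over training. The class names (SparseMLP, LadderOmissionRateScheduler), the mean-absolute-activation importance metric, and the specific rates and stage lengths are illustrative assumptions, not the authors' exact method or hyperparameters; in particular, the paper skips computation for omitted neurons, whereas this toy version only masks them.

# Hypothetical sketch of SAT-style neuron omission; not the paper's implementation.
import torch
import torch.nn as nn


class LadderOmissionRateScheduler:
    """Step ("ladder") schedule: hold each omission rate for a fixed number of
    training steps, then jump to the next level. Rates/stage length are made up."""

    def __init__(self, rates=(0.0, 0.2, 0.4), steps_per_stage=1000):
        self.rates = rates
        self.steps_per_stage = steps_per_stage

    def rate(self, step: int) -> float:
        stage = min(step // self.steps_per_stage, len(self.rates) - 1)
        return self.rates[stage]


class SparseMLP(nn.Module):
    """Toy feed-forward block that zeroes out the least-important hidden neurons."""

    def __init__(self, d_model=64, d_hidden=256):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, omission_rate: float) -> torch.Tensor:
        h = self.act(self.up(x))                      # (batch, seq, d_hidden)
        if omission_rate > 0.0:
            # One possible importance metric: mean |activation| per hidden neuron
            # over the batch and sequence dimensions (an assumption, not the
            # paper's extended metric).
            importance = h.abs().mean(dim=(0, 1))     # (d_hidden,)
            k = int(h.size(-1) * omission_rate)
            if k > 0:
                _, drop_idx = torch.topk(importance, k, largest=False)
                mask = torch.ones_like(importance)
                mask[drop_idx] = 0.0                  # "exclude" inactive neurons
                h = h * mask                          # a real kernel would skip them
        return self.down(h)


# Usage: the scheduler raises the omission rate as training progresses.
mlp = SparseMLP()
sched = LadderOmissionRateScheduler()
x = torch.randn(2, 8, 64)
for step in (0, 1500, 3000):
    y = mlp(x, omission_rate=sched.rate(step))
    print(step, sched.rate(step), y.shape)

The actual speed-up in the paper comes from not computing the omitted neurons at all; the masking here is only meant to show where the importance scores and the ladder schedule plug in.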
Anthology ID:
2024.findings-acl.875
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14696–14707
URL:
https://aclanthology.org/2024.findings-acl.875
DOI:
10.18653/v1/2024.findings-acl.875
Cite (ACL):
Da Ma, Lu Chen, Pengyu Wang, Hongshen Xu, Hanqi Li, Liangtai Sun, Su Zhu, Shuai Fan, and Kai Yu. 2024. Sparsity-Accelerated Training for Large Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 14696–14707, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Sparsity-Accelerated Training for Large Language Models (Ma et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.875.pdf