Enhancing One-Shot Pruned Pre-trained Language Models through Sparse-Dense-Sparse Mechanism

Guanchen Li, Xiandong Zhao, Lian Liu, Zeping Li, Yixing Xu, Dong Li, Lu Tian, Jie He, Ashish Sirasao, Emad Barsoum


Abstract
Pre-trained language models (PLMs) are engineered for robust contextual understanding and exhibit outstanding performance across a wide range of natural language processing tasks. However, their considerable size incurs significant computational and storage costs. Modern pruning strategies employ retraining-free one-shot techniques to compress PLMs; however, these approaches often lead to a non-negligible loss in performance. In this paper, we propose SDS, a Sparse-Dense-Sparse pruning framework that enhances the performance of pruned PLMs from a weight-distribution optimization perspective. The pruning process proceeds in three steps. First, we prune less critical connections in the model using a conventional one-shot pruning method. Next, we reconstruct a dense model with a pruning-friendly weight distribution by reactivating the pruned connections under sparse regularization. Finally, we perform a second round of pruning, yielding a pruned model superior to the one obtained from the initial pruning. Experiments demonstrate that SDS outperforms the state-of-the-art pruning techniques SparseGPT and Wanda under identical sparsity configurations. For instance, compared to Wanda with 2:4 sparsity, SDS reduces perplexity by 5.16 on Raw-Wikitext2 and improves average accuracy by 3.86% across multiple zero-shot benchmarks for LLaMA-3-8B.
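To make the three-step sparse-dense-sparse pipeline concrete, below is a minimal PyTorch sketch applied to a single weight matrix. It is an illustrative assumption, not the authors' implementation: magnitude pruning stands in for the one-shot pruners (the paper uses SparseGPT or Wanda), and an L1 penalty on a calibration-reconstruction loss stands in for the paper's sparse regularization. The names `magnitude_prune` and `sds_layer`, the random calibration inputs, and all hyperparameters are hypothetical.

```python
import torch


def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """One-shot pruning stand-in: zero the smallest-magnitude weights.
    (The paper uses SparseGPT/Wanda; magnitude pruning keeps the sketch simple.)"""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))


def sds_layer(weight: torch.Tensor, sparsity: float = 0.5,
              reg_strength: float = 1e-3, steps: int = 100,
              lr: float = 1e-4) -> torch.Tensor:
    # Step 1 (sparse): initial one-shot prune.
    pruned = magnitude_prune(weight, sparsity)

    # Step 2 (dense): reconstruct a dense matrix with a pruning-friendly
    # distribution. All entries, including previously pruned ones, are
    # reactivated and trained to match the original layer's outputs under an
    # L1 sparsity penalty (a stand-in for the paper's sparse regularization).
    dense = pruned.clone().requires_grad_(True)
    opt = torch.optim.Adam([dense], lr=lr)
    calib = torch.randn(256, weight.shape[1])  # hypothetical calibration inputs
    target = calib @ weight.T                  # original layer's outputs
    for _ in range(steps):
        opt.zero_grad()
        loss = (torch.nn.functional.mse_loss(calib @ dense.T, target)
                + reg_strength * dense.abs().sum())
        loss.backward()
        opt.step()

    # Step 3 (sparse): second one-shot prune of the regularized dense matrix.
    return magnitude_prune(dense.detach(), sparsity)
```

Calling `sds_layer(w)` on an `(out_features, in_features)` weight returns a matrix at the same sparsity level, but whose surviving weights were shaped by the intermediate dense regularization step rather than by a single pruning pass.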
Anthology ID:
2025.coling-main.117
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1718–1735
URL:
https://aclanthology.org/2025.coling-main.117/
Cite (ACL):
Guanchen Li, Xiandong Zhao, Lian Liu, Zeping Li, Yixing Xu, Dong Li, Lu Tian, Jie He, Ashish Sirasao, and Emad Barsoum. 2025. Enhancing One-Shot Pruned Pre-trained Language Models through Sparse-Dense-Sparse Mechanism. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1718–1735, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Enhancing One-Shot Pruned Pre-trained Language Models through Sparse-Dense-Sparse Mechanism (Li et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.117.pdf