Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization

Sungbin Shin, Wonpyo Park, Jaeho Lee, Namhoon Lee


Abstract
This work suggests fundamentally rethinking the current practice of pruning large language models (LLMs). The standard approach is divide and conquer: split the model into submodels, sequentially prune them, and reconstruct the predictions of their dense counterparts on small calibration data, one submodel at a time; the final model is obtained simply by putting the resulting sparse submodels together. While this approach enables pruning under memory constraints, it incurs high reconstruction errors. In this work, we first present an array of reconstruction techniques that can reduce this error by more than 90%. Surprisingly, however, we discover that minimizing reconstruction error is not always ideal and can overfit the given calibration data, increasing language perplexity and degrading performance on downstream tasks. We find that a strategy of self-generating calibration data can mitigate this trade-off between reconstruction and generalization, suggesting new directions in light of both the benefits and pitfalls of reconstruction for pruning LLMs.
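To make the layer-wise reconstruction objective described above concrete, the following is a minimal sketch (not the authors' implementation): prune one linear submodel's weights by magnitude, then re-fit the surviving weights by least squares so the sparse layer reproduces the dense layer's outputs on calibration inputs. The function name `prune_and_reconstruct` and the magnitude-based pruning criterion are illustrative assumptions.

```python
import numpy as np

def prune_and_reconstruct(W, X, sparsity=0.5):
    """Illustrative layer-wise pruning with reconstruction error minimization.

    W : (d_out, d_in) dense weight of one submodel (a linear layer).
    X : (d_in, n) calibration inputs collected for this layer.
    Returns a sparse W_hat minimizing ||W X - W_hat X||_F^2 on the kept support.
    """
    # Dense outputs the sparse layer should reproduce.
    Y = W @ X                                   # (d_out, n)

    # Magnitude-based mask (one possible pruning criterion).
    k = int(W.size * sparsity)
    thresh = np.sort(np.abs(W), axis=None)[k]
    mask = np.abs(W) >= thresh                  # True = keep weight

    W_hat = np.zeros_like(W)
    for i in range(W.shape[0]):                 # solve one output row at a time
        keep = mask[i]
        if not keep.any():
            continue
        # Least squares: re-fit surviving weights of row i to match Y[i].
        A = X[keep].T                           # (n, n_keep)
        w, *_ = np.linalg.lstsq(A, Y[i], rcond=None)
        W_hat[i, keep] = w
    return W_hat, mask

# Toy example: reconstruction error before vs. after re-fitting.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
X = rng.normal(size=(16, 64))
W_hat, mask = prune_and_reconstruct(W, X, sparsity=0.5)
err_pruned = np.linalg.norm(W @ X - (W * mask) @ X)
err_recon  = np.linalg.norm(W @ X - W_hat @ X)
print(f"pruned-only error {err_pruned:.3f} vs reconstructed {err_recon:.3f}")
```

The paper's observation is that driving this per-submodel error toward zero on a small calibration set can overfit it, which is why the calibration data itself matters as much as the reconstruction procedure.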
Anthology ID:
2024.emnlp-main.68
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1182–1191
URL:
https://aclanthology.org/2024.emnlp-main.68
Cite (ACL):
Sungbin Shin, Wonpyo Park, Jaeho Lee, and Namhoon Lee. 2024. Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1182–1191, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization (Shin et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.68.pdf