Rethinking Network Pruning – under the Pre-train and Fine-tune Paradigm

Dongkuan Xu, Ian En-Hsu Yen, Jinxi Zhao, Zhibin Xiao


Abstract
Transformer-based pre-trained language models have significantly improved the performance of various natural language processing (NLP) tasks in recent years. While effective and prevalent, these models are usually prohibitively large for resource-limited deployment scenarios. A thread of research has thus been working on applying network pruning techniques under the pre-train-then-fine-tune paradigm widely adopted in NLP. However, the existing pruning results on benchmark transformers, such as BERT, are not as remarkable as the pruning results in the literature of convolutional neural networks (CNNs). In particular, common wisdom in pruning CNNs states that sparse pruning compresses a model more than reducing its number of channels and layers, yet existing works on sparse pruning of BERT yield inferior results to its small-dense counterparts such as TinyBERT. In this work, we aim to fill this gap by studying how knowledge is transferred and lost during the pre-train, fine-tune, and pruning process, and by proposing a knowledge-aware sparse pruning process that achieves significantly better results than the existing literature. We show for the first time that sparse pruning compresses a BERT model significantly more than reducing its number of channels and layers. Experiments on multiple data sets of the GLUE benchmark show that our method outperforms the leading competitors with a 20-times weight/FLOPs compression and negligible loss in prediction accuracy.
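To make concrete the contrast the abstract draws between sparse (unstructured) pruning and reducing channels or layers, the sketch below applies plain magnitude pruning to the linear layers of a small transformer encoder with PyTorch. This is only a generic illustration of unstructured sparsity, not the paper's knowledge-aware pruning procedure; the 95% sparsity level and the toy encoder are assumptions chosen for the example rather than values taken from the paper.

# Generic magnitude-based sparse pruning sketch (not the paper's method).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune_linear_layers(model: nn.Module, sparsity: float = 0.95) -> nn.Module:
    """Zero out the smallest-magnitude weights of every nn.Linear in `model`."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # l1_unstructured masks the `amount` fraction of weights with the
            # lowest absolute value, leaving the layer shape unchanged.
            prune.l1_unstructured(module, name="weight", amount=sparsity)
            # Fold the mask into the weight tensor so the zeros are permanent.
            prune.remove(module, "weight")
    return model

if __name__ == "__main__":
    # Tiny stand-in encoder; in practice this would be a pre-trained BERT.
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256),
        num_layers=2,
    )
    magnitude_prune_linear_layers(encoder, sparsity=0.95)
    total = sum(p.numel() for p in encoder.parameters())
    zeros = sum((p == 0).sum().item() for p in encoder.parameters())
    print(f"overall sparsity: {zeros / total:.2%}")

Unlike removing whole channels or layers, this kind of pruning keeps the model's dimensions intact and zeroes individual weights, which is why it can reach much higher compression ratios when combined with a knowledge-aware training schedule as proposed in the paper.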
Anthology ID:
2021.naacl-main.188
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2376–2382
URL:
https://aclanthology.org/2021.naacl-main.188
DOI:
10.18653/v1/2021.naacl-main.188
Cite (ACL):
Dongkuan Xu, Ian En-Hsu Yen, Jinxi Zhao, and Zhibin Xiao. 2021. Rethinking Network Pruning – under the Pre-train and Fine-tune Paradigm. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2376–2382, Online. Association for Computational Linguistics.
Cite (Informal):
Rethinking Network Pruning – under the Pre-train and Fine-tune Paradigm (Xu et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.188.pdf
Optional supplementary code:
2021.naacl-main.188.OptionalSupplementaryCode.zip
Video:
https://aclanthology.org/2021.naacl-main.188.mp4
Code
derronxu/sparsebert
Data
GLUE, QNLI, SQuAD