Structured Pruning Learns Compact and Accurate Models

Mengzhou Xia, Zexuan Zhong, Danqi Chen


Abstract
The growing size of neural language models has led to increased attention to model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but rarely achieve speedups as large as distillation does. Distillation methods, in turn, require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method CoFi (Coarse- and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10X speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches.
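To make the key insight concrete, here is a minimal illustrative sketch (not the authors' implementation, and the mask shapes are assumptions for illustration) of how masks of different granularities can jointly control a pruning decision: a fine-grained unit, such as an attention head, is effectively kept only if both its own mask and the coarse-grained mask of its enclosing layer are kept.

```python
import numpy as np

def effective_head_mask(layer_mask, head_mask):
    """Combine a coarse per-layer mask with a fine per-head mask.

    layer_mask: shape (num_layers,), entries in {0, 1}; 0 prunes a whole layer.
    head_mask:  shape (num_layers, num_heads), entries in {0, 1}; 0 prunes one head.
    A head survives only when both its layer and its own mask are 1,
    so the effective mask is the elementwise product.
    """
    return layer_mask[:, None] * head_mask

# Example: layer 1 is pruned coarsely, so all of its heads are removed,
# even though their fine-grained masks are still 1.
layer_mask = np.array([1, 0, 1])
head_mask = np.array([[1, 1, 0],
                      [1, 1, 1],
                      [0, 1, 1]])
print(effective_head_mask(layer_mask, head_mask))
# Row 1 becomes all zeros: the coarse mask overrides the fine masks.
```

In CoFi, such masks are learned jointly during training rather than set by hand; the sketch only shows how decisions at different granularities compose multiplicatively.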
Anthology ID:
2022.acl-long.107
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1513–1528
URL:
https://aclanthology.org/2022.acl-long.107
DOI:
10.18653/v1/2022.acl-long.107
Bibkey:
Cite (ACL):
Mengzhou Xia, Zexuan Zhong, and Danqi Chen. 2022. Structured Pruning Learns Compact and Accurate Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1513–1528, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Structured Pruning Learns Compact and Accurate Models (Xia et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.107.pdf
Software:
 2022.acl-long.107.software.zip
Video:
 https://aclanthology.org/2022.acl-long.107.mp4
Code:
 princeton-nlp/cofipruning + additional community code
Data:
 CoLA, GLUE, MRPC, MultiNLI, QNLI, SQuAD, SST, SST-2