Gradient-based Intra-attention Pruning on Pre-trained Language Models

Ziqing Yang, Yiming Cui, Xin Yao, Shijin Wang


Abstract
Pre-trained language models achieve superior performance but are computationally expensive. Techniques such as pruning and knowledge distillation have been developed to reduce their sizes and latencies. In this work, we propose a structured pruning method, GRAIN (gradient-based intra-attention pruning), which performs task-specific pruning with knowledge distillation and yields highly effective models. Different from common approaches that prune each attention head as a whole, GRAIN inspects and prunes intra-attention structures, which greatly expands the structure search space and enables more flexible models. We also propose a gradient separation strategy that reduces the interference of distillation on pruning for a better combination of the two approaches. Experiments on GLUE, SQuAD, and CoNLL 2003 show that GRAIN notably outperforms other methods, especially in the high sparsity regime, and achieves 6~7x speedups while maintaining 93%~99% performance. Under extreme compression, where only 3% of transformer weights remain, the pruned model is still competitive compared to larger models.
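To make the abstract's contrast between head-level and intra-attention pruning concrete, here is a minimal, hypothetical PyTorch sketch of gradient-based importance scoring applied to individual dimensions inside one attention projection. The toy `query` projection, the Taylor-style |weight × gradient| score, and the top-k selection are illustrative assumptions, not the paper's actual GRAIN implementation or its distillation and gradient-separation components.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy single-head query projection standing in for one attention sub-matrix.
# (Hypothetical sizes; real models use per-head slices of larger matrices.)
hidden, head_dim = 32, 16
query = nn.Linear(hidden, head_dim, bias=False)

# Dummy batch and loss as placeholders for the real task-specific objective.
x = torch.randn(8, hidden)
loss = query(x).pow(2).mean()
loss.backward()

# First-order, gradient-based importance of each intra-attention dimension:
# |weight * gradient| summed over the input axis (a common Taylor-style proxy).
importance = (query.weight * query.weight.grad).abs().sum(dim=1)

# Keep only the top-k dimensions *inside* the head instead of dropping the
# whole head, which is the extra flexibility the abstract highlights.
k = head_dim // 2
keep = torch.topk(importance, k).indices.sort().values
pruned_weight = query.weight.data[keep, :]  # shape: (k, hidden)
print(importance.shape, keep.tolist(), pruned_weight.shape)
```

In this sketch the pruning unit is a single output dimension of the query projection rather than an entire head, so the search space over kept structures is far larger than with head-level pruning, at the cost of bookkeeping when reassembling the compact matrices.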
Anthology ID:
2023.acl-long.156
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2775–2790
URL:
https://aclanthology.org/2023.acl-long.156
DOI:
10.18653/v1/2023.acl-long.156
Cite (ACL):
Ziqing Yang, Yiming Cui, Xin Yao, and Shijin Wang. 2023. Gradient-based Intra-attention Pruning on Pre-trained Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2775–2790, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Gradient-based Intra-attention Pruning on Pre-trained Language Models (Yang et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.156.pdf
Video:
https://aclanthology.org/2023.acl-long.156.mp4