AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression

Siyue Wu, Hongzhan Chen, Xiaojun Quan, Qifan Wang, Rui Wang


Abstract
Knowledge distillation has recently attracted considerable interest as a way to compress large language models. However, existing knowledge distillation methods suffer from two limitations. First, the student model simply imitates the teacher's behavior while ignoring the reasoning behind it. Second, these methods usually focus on transferring sophisticated, model-specific knowledge but overlook data-specific knowledge. In this paper, we present a novel attribution-driven knowledge distillation approach that explores the token-level rationale behind the teacher model using Integrated Gradients (IG) and transfers this attribution knowledge to the student model. To enhance the transfer of model reasoning and generalization ability, we further explore multi-view attribution distillation over all potential decisions of the teacher. Comprehensive experiments are conducted with BERT on the GLUE benchmark, and the results demonstrate the superior performance of our approach over several state-of-the-art methods.
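
To make the abstract's idea concrete, below is a minimal PyTorch sketch of attribution-driven distillation: token-level Integrated Gradients attributions are computed for both teacher and student, and the student is penalized for deviating from the teacher's normalized attribution map across all candidate classes ("multi-view"). The function and variable names (forward_fn, multi_view_attribution_loss, the zero-embedding baseline, the MSE matching loss) are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def integrated_gradients(forward_fn, input_embeds, baseline_embeds,
                         target_class, steps=20, create_graph=False):
    """Riemann-sum approximation of Integrated Gradients over token embeddings.

    forward_fn: callable mapping embeddings [B, T, H] to class logits [B, C].
    Returns token-level attributions [B, T] (hidden dimension summed out).
    Set create_graph=True when the result must stay differentiable (student side).
    """
    total_grads = torch.zeros_like(input_embeds)
    for step in range(1, steps + 1):
        alpha = step / steps
        # Interpolate between the baseline and the actual input embeddings
        interp = (baseline_embeds + alpha * (input_embeds - baseline_embeds)).detach()
        interp.requires_grad_(True)
        logits = forward_fn(interp)
        score = logits[:, target_class].sum()
        grads = torch.autograd.grad(score, interp, create_graph=create_graph)[0]
        total_grads = total_grads + grads
    # Element-wise IG, then collapse the hidden dimension to token-level scores
    ig = (input_embeds - baseline_embeds) * (total_grads / steps)
    return ig.sum(dim=-1)


def multi_view_attribution_loss(teacher_fn, student_fn,
                                teacher_embeds, student_embeds,
                                num_classes, steps=20):
    """Sum attribution-matching losses over every candidate class ("view")."""
    loss = 0.0
    t_base = torch.zeros_like(teacher_embeds)   # zero-embedding baseline (assumption)
    s_base = torch.zeros_like(student_embeds)
    for c in range(num_classes):
        t_attr = integrated_gradients(teacher_fn, teacher_embeds,
                                      t_base, c, steps).detach()
        s_attr = integrated_gradients(student_fn, student_embeds,
                                      s_base, c, steps, create_graph=True)
        # Normalize so attribution maps from differently sized models are comparable
        loss = loss + F.mse_loss(F.normalize(s_attr, dim=-1),
                                 F.normalize(t_attr, dim=-1))
    return loss
```

In training, a loss of this kind would typically be added to the usual cross-entropy and logit-distillation objectives with a weighting hyperparameter; the paper itself should be consulted for the exact attribution formulation and combination used.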
Anthology ID:
2023.acl-long.471
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8449–8465
URL:
https://aclanthology.org/2023.acl-long.471
DOI:
10.18653/v1/2023.acl-long.471
Cite (ACL):
Siyue Wu, Hongzhan Chen, Xiaojun Quan, Qifan Wang, and Rui Wang. 2023. AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8449–8465, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression (Wu et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.471.pdf
Video:
https://aclanthology.org/2023.acl-long.471.mp4