Dynamic Knowledge Distillation for Pre-trained Language Models

Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun


Abstract
Knowledge distillation (KD) has been proven effective for compressing large-scale pre-trained language models. However, existing methods conduct KD statically, e.g., the student model aligns its output distribution to that of a selected teacher model on a pre-defined training dataset. In this paper, we explore whether a dynamic knowledge distillation that empowers the student to adjust the learning procedure according to its competency is beneficial, in terms of both student performance and learning efficiency. We explore dynamic adjustment along three aspects: teacher model adoption, data selection, and KD objective adaptation. Experimental results show that (1) a proper selection of the teacher model can boost the performance of the student model; (2) conducting KD with only 10% of the most informative instances achieves comparable performance while greatly accelerating training; (3) the student performance can be boosted by adjusting the supervision contributions of the different alignment objectives. We find dynamic knowledge distillation promising and provide discussions on potential future directions towards more efficient KD methods.
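The sketch below illustrates the kind of adjustment the abstract describes: a standard soft-label KD loss plus a per-instance reweighting driven by student uncertainty, one plausible signal for dynamic data selection and objective adaptation. It is a minimal illustration, not the authors' implementation (see lancopku/dynamickd for that); the function names, the entropy-based weighting, and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Static soft-label KD: KL divergence between temperature-scaled
    teacher and student output distributions."""
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, scaled by T^2 to keep gradient magnitudes comparable
    return F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature ** 2


def uncertainty_weights(student_logits):
    """Illustrative dynamic weighting: up-weight instances on which the
    student is uncertain (high predictive entropy). This is an assumed
    selection signal, not the paper's exact criterion."""
    probs = F.softmax(student_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy / entropy.sum().clamp_min(1e-12)


def dynamic_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Per-instance KD loss reweighted by student uncertainty, so that
    more informative examples contribute more supervision."""
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    per_instance = F.kl_div(s_log_probs, t_probs, reduction="none").sum(dim=-1)
    weights = uncertainty_weights(student_logits.detach())
    return (weights * per_instance).sum() * temperature ** 2
```

In the same spirit, hard selection (training only on the top-k% highest-entropy instances) would correspond to the 10% data-selection result reported in the abstract.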
Anthology ID:
2021.emnlp-main.31
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
379–389
URL:
https://aclanthology.org/2021.emnlp-main.31
DOI:
10.18653/v1/2021.emnlp-main.31
Cite (ACL):
Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. 2021. Dynamic Knowledge Distillation for Pre-trained Language Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Dynamic Knowledge Distillation for Pre-trained Language Models (Li et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.31.pdf
Video:
https://aclanthology.org/2021.emnlp-main.31.mp4
Code:
lancopku/dynamickd
Data:
CoLA, IMDb Movie Reviews, MRPC, MultiNLI, SST, SST-5