Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization

Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, Ali Ghodsi


Abstract
Knowledge Distillation (KD) has been extensively used for natural language understanding (NLU) tasks to improve the generalization of a small model (the student) by transferring knowledge from a larger model (the teacher). Although KD methods achieve state-of-the-art performance in numerous settings, they suffer from several problems that limit their performance. It has been shown in the literature that the capacity gap between the teacher and the student networks can make KD ineffective. Additionally, existing KD techniques do not mitigate the noise in the teacher’s output: modeling the teacher’s noisy behaviour can distract the student from learning more useful features. We propose a new KD method that addresses these problems and facilitates training compared to previous techniques. Inspired by continuation optimization, we design a training procedure that optimizes the highly non-convex KD objective by starting with a smoothed version of this objective and gradually making it more complex as training proceeds. Our method (Continuation-KD) achieves state-of-the-art performance across various compact architectures on NLU (GLUE benchmark) and computer vision tasks (CIFAR-10 and CIFAR-100).
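The continuation idea described in the abstract can be illustrated with a minimal sketch (an illustrative assumption, not the authors' exact formulation): a smooth, teacher-matching KD term is blended with the harder cross-entropy term, and the blending weight is annealed over training so that optimization starts from the smoother objective and gradually shifts toward the full one. The function name, the linear schedule, and the loss weighting below are hypothetical choices for illustration.

import torch
import torch.nn.functional as F

def continuation_kd_loss(student_logits, teacher_logits, labels,
                         step, total_steps, temperature=2.0):
    # Hypothetical annealing coefficient: 1.0 at the start (smooth,
    # teacher-matching objective only), decaying linearly to 0.0 by the
    # end of training (the harder objective dominates).
    alpha = max(0.0, 1.0 - step / total_steps)

    # Smooth term: match the teacher's temperature-softened distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Harder term: standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + (1.0 - alpha) * ce

The exact smoothing schedule and objective used by Continuation-KD should be taken from the paper itself; this sketch only conveys the general shape of a continuation-style training loss.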
Anthology ID:
2022.findings-emnlp.385
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5260–5269
URL:
https://aclanthology.org/2022.findings-emnlp.385
DOI:
10.18653/v1/2022.findings-emnlp.385
Cite (ACL):
Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, and Ali Ghodsi. 2022. Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5260–5269, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization (Jafari et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.385.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.385.mp4