CILDA: Contrastive Data Augmentation Using Intermediate Layer Knowledge Distillation

Md Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, Pascal Poupart


Abstract
Knowledge distillation (KD) is an efficient framework for compressing large-scale pre-trained language models. Recent years have seen a surge of research aiming to improve KD by leveraging Contrastive Learning, Intermediate Layer Distillation, Data Augmentation, and Adversarial Training. In this work, we propose a learning-based data augmentation technique tailored for knowledge distillation, called CILDA. To the best of our knowledge, this is the first time that intermediate layer representations of the main task are used to improve the quality of augmented samples. More precisely, we introduce an augmentation technique for KD based on intermediate layer matching using a contrastive loss to improve masked adversarial data augmentation. CILDA outperforms existing state-of-the-art KD approaches on the GLUE benchmark, as well as in an out-of-domain evaluation.
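The abstract describes "intermediate layer matching using a contrastive loss" only at a high level; the sketch below is a rough, hedged illustration of what such a loss can look like in practice, not the authors' implementation. It computes an InfoNCE-style contrastive loss between pooled intermediate-layer representations of two models, where the matching example in the batch is the positive and the other examples are in-batch negatives. The function name, tensor shapes, and temperature value are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def intermediate_contrastive_loss(student_h, teacher_h, temperature=0.1):
    """InfoNCE-style contrastive loss over intermediate-layer representations.

    student_h, teacher_h: (batch, hidden) pooled outputs from one
    intermediate layer of each model. For each example, the teacher
    representation of the same example is the positive pair; the rest
    of the batch provides negatives. This is an illustrative sketch,
    not the exact CILDA objective.
    """
    s = F.normalize(student_h, dim=-1)
    t = F.normalize(teacher_h, dim=-1)
    logits = s @ t.t() / temperature            # (batch, batch) cosine similarities
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)     # diagonal entries are the positives

# Illustrative usage with random tensors standing in for layer outputs
# (hidden sizes assumed equal; a learned projection would be used otherwise).
if __name__ == "__main__":
    student_h = torch.randn(8, 768)
    teacher_h = torch.randn(8, 768)
    print(intermediate_contrastive_loss(student_h, teacher_h))
```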
Anthology ID:
2022.coling-1.417
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
4707–4713
URL:
https://aclanthology.org/2022.coling-1.417
Cite (ACL):
Md Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, and Pascal Poupart. 2022. CILDA: Contrastive Data Augmentation Using Intermediate Layer Knowledge Distillation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4707–4713, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
CILDA: Contrastive Data Augmentation Using Intermediate Layer Knowledge Distillation (Haidar et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.417.pdf
Data
GLUE, QNLI