Co-training and Co-distillation for Quality Improvement and Compression of Language Models

Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Hongbo Zhang, Sung Hwang, Alexander Min


Abstract
Knowledge Distillation (KD) compresses computationally expensive pre-trained language models (PLMs) by transferring their knowledge to smaller models, allowing their use in resource-constrained or real-time settings. However, most smaller models fail to surpass the performance of the original larger model, so performance is sacrificed to gain inference speed. To address this issue, we propose Co-Training and Co-Distillation (CTCD), a novel framework that improves performance and inference speed together by co-training two models while mutually distilling knowledge between them. The CTCD framework achieves this based on two key findings: 1) Distilling knowledge from the smaller model to the larger model during co-training improves the performance of the larger model. 2) The enhanced performance of the larger model further boosts the performance of the smaller model. The CTCD framework shows promise as it can be combined with existing techniques such as architecture design or data augmentation, replacing one-way KD methods, to achieve further performance improvement. Extensive ablation studies demonstrate the effectiveness of CTCD, and the small model distilled by CTCD outperforms the original larger model by a significant margin of 1.66 on the GLUE benchmark.
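In concrete terms, the co-distillation objective described in the abstract is bidirectional: each model is optimized on its task loss plus a distillation loss against the other model's detached predictions, so the small model learns from the large one and vice versa. The following is a minimal PyTorch sketch of one such training step for a classification setup (e.g., GLUE fine-tuning); the helper names (kd_loss, ctcd_step), the loss weight alpha, and the temperature T are illustrative assumptions rather than the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, T=2.0):
        # Soft-label distillation: KL(teacher || student) at temperature T.
        # Scaling by T*T keeps gradient magnitudes comparable to the task loss.
        return F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

    def ctcd_step(large, small, inputs, labels, opt_large, opt_small, alpha=0.5):
        # Both models see the same batch; each produces task logits.
        logits_l = large(inputs)
        logits_s = small(inputs)

        # Large model: task loss + KD from the small model (small -> large).
        loss_l = (1 - alpha) * F.cross_entropy(logits_l, labels) \
                 + alpha * kd_loss(logits_l, logits_s.detach())

        # Small model: task loss + KD from the large model (large -> small).
        loss_s = (1 - alpha) * F.cross_entropy(logits_s, labels) \
                 + alpha * kd_loss(logits_s, logits_l.detach())

        opt_large.zero_grad(); loss_l.backward(); opt_large.step()
        opt_small.zero_grad(); loss_s.backward(); opt_small.step()
        return loss_l.item(), loss_s.item()

The detach() calls keep each distillation direction one-way within a single step (no gradient flows into the model currently acting as the teacher), while repeating the step over training lets the two models improve each other, matching the mutual-distillation structure the abstract describes.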
Anthology ID:
2023.findings-emnlp.500
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7458–7467
URL:
https://aclanthology.org/2023.findings-emnlp.500
DOI:
10.18653/v1/2023.findings-emnlp.500
Cite (ACL):
Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Hongbo Zhang, Sung Hwang, and Alexander Min. 2023. Co-training and Co-distillation for Quality Improvement and Compression of Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7458–7467, Singapore. Association for Computational Linguistics.
Cite (Informal):
Co-training and Co-distillation for Quality Improvement and Compression of Language Models (Lee et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.500.pdf