Cost-effective Distillation of Large Language Models

Sayantan Dasgupta, Trevor Cohn, Timothy Baldwin


Abstract
Knowledge distillation (KD) involves training a small “student” model to replicate the strong performance of a high-capacity “teacher” model, enabling efficient deployment in resource-constrained settings. Top-performing methods tend to be task- or architecture-specific and lack generalizability. Several existing approaches require pretraining the teacher on task-specific datasets, which can be costly for large datasets and unstable for small ones. Here we propose an approach to improving KD through a novel distillation loss that is agnostic to both the task and the model architecture. We successfully apply our method to the distillation of BERT-base and achieve highly competitive results for the distilled student across a range of GLUE tasks, especially tasks with smaller datasets.
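
For context, the sketch below illustrates the standard soft-label distillation objective (temperature-scaled KL divergence blended with hard-label cross-entropy) that KD setups commonly start from. It is only an illustration of the general KD framing described in the abstract, not the task- and architecture-agnostic loss proposed in this paper; the function name kd_loss and the hyperparameters temperature and alpha are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard soft-label KD: blend temperature-scaled KL with hard-label CE."""
    # Soft-target term: KL divergence between the temperature-softened
    # student and teacher distributions, rescaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard-target term: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

if __name__ == "__main__":
    # Toy usage: an 8-example batch with 3 classes.
    student = torch.randn(8, 3)
    teacher = torch.randn(8, 3)
    gold = torch.randint(0, 3, (8,))
    print(kd_loss(student, teacher, gold).item())
```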
Anthology ID: 2023.findings-acl.463
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 7346–7354
URL: https://aclanthology.org/2023.findings-acl.463
DOI: 10.18653/v1/2023.findings-acl.463
Cite (ACL): Sayantan Dasgupta, Trevor Cohn, and Timothy Baldwin. 2023. Cost-effective Distillation of Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7346–7354, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Cost-effective Distillation of Large Language Models (Dasgupta et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.463.pdf
Video: https://aclanthology.org/2023.findings-acl.463.mp4