BiLD: Bi-directional Logits Difference Loss for Large Language Model Distillation

Minchong Li, Feng Zhou, Xiaohui Song


Abstract
In recent years, large language models (LLMs) have shown exceptional capabilities across various natural language processing (NLP) tasks. However, such impressive performance often comes with the trade-off of an increased parameter size, posing significant challenges for widespread deployment. Knowledge distillation (KD) provides a solution by transferring knowledge from a large teacher model to a smaller student model. In this paper, we explore the task-specific distillation of LLMs at the logit level. Our investigation reveals that the logits of fine-tuned LLMs exhibit a more extreme long-tail distribution than those from vision models, with hidden “noise” in the long tail affecting distillation performance. Furthermore, existing logits distillation methods often struggle to effectively utilize the internal ranking information from the logits. To address these issues, we propose the Bi-directional Logits Difference (BiLD) loss. The BiLD loss filters out the long-tail noise by utilizing only the top-k teacher and student logits, and leverages the internal logits ranking information by constructing logits differences. To evaluate the BiLD loss, we conduct comprehensive experiments on 13 datasets using two types of LLMs. Our results show that the BiLD loss, with only the top-8 logits, outperforms supervised fine-tuning (SFT), vanilla KL loss, and five other distillation methods from both NLP and CV fields.
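To make the abstract's description concrete, below is a minimal PyTorch sketch of a bi-directional logits-difference loss built from top-k logits and pairwise logit differences, as the abstract describes. The exact pairing scheme, temperature, and weighting in the paper may differ; the function names, k=8 default, and KL-on-differences formulation here are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of a bi-directional logits-difference (BiLD-style) loss.
# Assumptions: top-k filtering, pairwise differences within the top-k set,
# and a KL divergence in each of the two directions (teacher-led indices
# and student-led indices). Hyperparameters are placeholders.
import torch
import torch.nn.functional as F


def logit_differences(logits: torch.Tensor) -> torch.Tensor:
    """Pairwise differences l_i - l_j over the last dim, flattened to (..., k*k)."""
    diff = logits.unsqueeze(-1) - logits.unsqueeze(-2)
    return diff.flatten(start_dim=-2)


def one_direction(lead_logits, follow_logits, k, temperature):
    """KL between difference distributions, using the lead model's top-k indices."""
    _, idx = lead_logits.topk(k, dim=-1)
    lead_top = lead_logits.gather(-1, idx)
    follow_top = follow_logits.gather(-1, idx)
    p = F.softmax(logit_differences(lead_top) / temperature, dim=-1)
    log_q = F.log_softmax(logit_differences(follow_top) / temperature, dim=-1)
    return F.kl_div(log_q, p, reduction="batchmean") * temperature**2


def bild_loss(student_logits, teacher_logits, k=8, temperature=2.0):
    """Sum of the teacher-led and student-led difference losses."""
    t2s = one_direction(teacher_logits, student_logits, k, temperature)
    s2t = one_direction(student_logits, teacher_logits, k, temperature)
    return t2s + s2t


if __name__ == "__main__":
    student = torch.randn(4, 32000)  # (batch, vocab) logits
    teacher = torch.randn(4, 32000)
    print(bild_loss(student, teacher).item())
```

Because only the top-k logits enter the loss, the long tail of the vocabulary distribution is ignored, and because the loss is computed over differences rather than raw logits, the relative ordering within the top-k set is what gets matched.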
Anthology ID:
2025.coling-main.78
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1168–1182
URL:
https://aclanthology.org/2025.coling-main.78/
Cite (ACL):
Minchong Li, Feng Zhou, and Xiaohui Song. 2025. BiLD: Bi-directional Logits Difference Loss for Large Language Model Distillation. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1168–1182, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
BiLD: Bi-directional Logits Difference Loss for Large Language Model Distillation (Li et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.78.pdf