Teacher-Student Training for Debiasing: General Permutation Debiasing for Large Language Models

Adian Liusie, Yassir Fathullah, Mark Gales


Abstract
Large Language Models (LLMs) have demonstrated impressive zero-shot capabilities and versatility in NLP tasks; however, they sometimes fail to maintain crucial invariances for specific tasks. One example is permutation sensitivity, where an LLM's output may vary significantly depending on the order of the input options. While debiasing techniques can mitigate these issues and yield better performance and reliability, they often come with a high computational cost at inference. This paper addresses this inference-time inefficiency. The aim is to distill the capabilities of a computationally intensive, debiased teacher model into a more compact student model. We explore two variants of student models: one based on pure distillation, and the other on an error-correction approach for more complex tasks, where the student corrects a single biased decision from the teacher to achieve a debiased output. Our approach is general and can be applied to both black-box and white-box LLMs. Furthermore, we demonstrate that our compact, encoder-only student models can outperform their larger, biased teacher counterparts, achieving better results with significantly fewer parameters.
Anthology ID:
2024.findings-acl.81
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1376–1387
URL:
https://aclanthology.org/2024.findings-acl.81
DOI:
10.18653/v1/2024.findings-acl.81
Cite (ACL):
Adian Liusie, Yassir Fathullah, and Mark Gales. 2024. Teacher-Student Training for Debiasing: General Permutation Debiasing for Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 1376–1387, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Teacher-Student Training for Debiasing: General Permutation Debiasing for Large Language Models (Liusie et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.81.pdf