Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text

Qianhui Wu, Huiqiang Jiang, Haonan Yin, Börje Karlsson, Chin-Yew Lin


Abstract
Self-supervised representation learning has proved to be a valuable component for out-of-distribution (OoD) detection with only the texts of in-distribution (ID) examples. These approaches either train a language model from scratch or fine-tune a pre-trained language model using ID examples, and then take the perplexity output by the language model as the OoD score. In this paper, we analyze the complementary characteristics of both methods and propose a multi-level knowledge distillation approach that integrates their strengths while mitigating their limitations. Specifically, we use a fine-tuned model as the teacher to teach a randomly initialized student model on the ID examples. In addition to prediction-layer distillation, we present a similarity-based intermediate-layer distillation method to thoroughly explore the representation space of the teacher model. In this way, the learned student can better represent the ID data manifold while gaining a stronger ability to map OoD examples outside the ID data manifold, thanks to the regularization inherited from pre-training. Moreover, the student model sees only ID examples during parameter learning, which further promotes more distinguishable features for OoD detection. We conduct extensive experiments over multiple benchmark datasets, i.e., CLINC150, SST, ROSTD, 20 NewsGroups, and AG News, showing that the proposed method yields new state-of-the-art performance. We also explore its application as an AIGC detector to distinguish answers generated by ChatGPT from those written by human experts. We observe that our model exceeds human evaluators in the pair-expert task on the Human ChatGPT Comparison Corpus.
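To make the two distillation objectives and the perplexity-based scoring concrete, below is a minimal PyTorch sketch of the components the abstract describes. The function names, tensor shapes, similarity formulation, and loss weighting are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the objectives described in the abstract (assumptions, not the authors' code).
import torch
import torch.nn.functional as F

def prediction_layer_distillation(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between teacher and student next-token distributions.
    Shapes: (batch, seq_len, vocab_size)."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

def similarity_based_layer_distillation(student_hidden, teacher_hidden):
    """Match the pairwise token-similarity structure of an intermediate layer
    rather than the raw hidden states, so the student explores the teacher's
    representation space even when hidden sizes differ.
    Shapes: (batch, seq_len, hidden_size)."""
    def sim_matrix(h):
        h = F.normalize(h, dim=-1)               # unit-length token vectors
        return torch.bmm(h, h.transpose(1, 2))   # (batch, seq_len, seq_len)
    return F.mse_loss(sim_matrix(student_hidden), sim_matrix(teacher_hidden))

def ood_score(student_logits, input_ids):
    """Perplexity of the student LM on a text; higher values suggest OoD."""
    # Shift so position i predicts token i+1 (standard causal-LM evaluation).
    logits = student_logits[:, :-1, :]
    targets = input_ids[:, 1:]
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="mean"
    )
    return torch.exp(nll)
```

In training, the student (randomly initialized, seeing only ID texts) would be optimized on a weighted sum of the two distillation losses against the fine-tuned teacher; at test time, only the student's perplexity is used as the OoD score.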
Anthology ID:
2023.acl-long.403
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7317–7332
URL:
https://aclanthology.org/2023.acl-long.403
DOI:
10.18653/v1/2023.acl-long.403
Cite (ACL):
Qianhui Wu, Huiqiang Jiang, Haonan Yin, Börje Karlsson, and Chin-Yew Lin. 2023. Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7317–7332, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text (Wu et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.403.pdf
Video:
https://aclanthology.org/2023.acl-long.403.mp4