Self-training Large Language Models through Knowledge Detection

Yeo Wei Jie, Teddy Ferdinan, Przemyslaw Kazienko, Ranjan Satapathy, Erik Cambria


Abstract
Large language models (LLMs) often necessitate extensive labeled datasets and training compute to achieve impressive performance across downstream tasks. This paper explores a self-training paradigm, where the LLM autonomously curates its own labels and selectively trains on unknown data samples identified through a reference-free consistency method. Empirical evaluations demonstrate significant improvements in reducing hallucination in generation across multiple subjects. Furthermore, the selective training framework mitigates catastrophic forgetting in out-of-distribution benchmarks, addressing a critical limitation in training LLMs. Our findings suggest that such an approach can substantially reduce the dependency on large labeled datasets, paving the way for more scalable and cost-effective language model training.
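The abstract's reference-free consistency idea can be sketched roughly as follows: sample several answers per question and treat questions on which the model disagrees with itself as "unknown", hence worth self-training on. The sketch below is illustrative only and not the authors' exact procedure; the `model.generate` interface, the agreement measure, and the threshold are all assumptions.

```python
from collections import Counter

def sample_answers(model, question, k=5):
    """Draw k stochastic answers for one question.
    `model` is assumed to expose generate(prompt, temperature) --
    an assumed interface for this sketch, not a specific library API."""
    return [model.generate(question, temperature=0.7) for _ in range(k)]

def consistency_score(answers):
    """Fraction of samples agreeing with the most common answer.
    A low score suggests the model is unsure about this question."""
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

def select_unknown_samples(model, questions, k=5, threshold=0.6):
    """Reference-free selection: keep questions whose self-consistency
    falls below `threshold` as candidates for self-training."""
    return [q for q in questions
            if consistency_score(sample_answers(model, q, k=k)) < threshold]
```

The selected questions would then be paired with self-curated labels and used for further fine-tuning, per the selective-training framework described in the abstract.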
Anthology ID: 2024.findings-emnlp.883
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15033–15045
URL: https://aclanthology.org/2024.findings-emnlp.883
DOI: 10.18653/v1/2024.findings-emnlp.883
Cite (ACL): Yeo Wei Jie, Teddy Ferdinan, Przemyslaw Kazienko, Ranjan Satapathy, and Erik Cambria. 2024. Self-training Large Language Models through Knowledge Detection. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15033–15045, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Self-training Large Language Models through Knowledge Detection (Wei Jie et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.883.pdf