Low-Confidence Gold: Refining Low-Confidence Samples for Efficient Instruction Tuning

Hongyi Cai, Jie Li, Mohammad Mahdinur Rahman, Wenzhen Dong


Abstract
The effectiveness of instruction fine-tuning for Large Language Models is fundamentally constrained by the quality and efficiency of training datasets. This work introduces Low-Confidence Gold (LCG), a novel filtering framework that employs centroid-based clustering and confidence-guided selection to identify valuable instruction pairs. Through a semi-supervised approach using a lightweight classifier trained on representative samples, LCG curates high-quality subsets while preserving data diversity. Experimental evaluation demonstrates that models fine-tuned on LCG-filtered subsets of 6K samples outperform existing methods, with substantial improvements on MT-bench and consistent gains across comprehensive evaluation metrics. The framework's efficiency, achieved without sacrificing model performance, establishes a promising direction for efficient instruction tuning.
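
As a rough illustration of the pipeline the abstract describes, the sketch below clusters instruction embeddings around centroids, trains a lightweight classifier on cluster-representative samples (the semi-supervised step), and keeps the pairs on which the classifier is least confident. This is a minimal sketch assuming precomputed sentence embeddings and scikit-learn; every name, model choice, and threshold is an illustrative assumption, not the authors' released implementation.

```python
# Hypothetical LCG-style filter: centroid clustering -> lightweight
# classifier on representative samples -> keep low-confidence pairs.
# Illustrative only; not the paper's actual code.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def lcg_filter(embeddings: np.ndarray, n_clusters: int = 100,
               reps_per_cluster: int = 5, subset_size: int = 6000) -> np.ndarray:
    """Return indices of a low-confidence subset of the instruction pairs."""
    # 1) Centroid-based clustering over instruction embeddings.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)

    # 2) Pseudo-label the samples closest to each centroid with their
    #    cluster id and fit a lightweight classifier on them.
    rep_idx = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        rep_idx.extend(members[np.argsort(dists)[:reps_per_cluster]])
    rep_idx = np.asarray(rep_idx)
    clf = LogisticRegression(max_iter=1000).fit(embeddings[rep_idx], km.labels_[rep_idx])

    # 3) Confidence-guided selection: rank every sample by the classifier's
    #    top class probability and keep the least confident ones, which lie
    #    away from cluster centroids and so help preserve data diversity.
    confidence = clf.predict_proba(embeddings).max(axis=1)
    return np.argsort(confidence)[:subset_size]

# Usage with any (N, d) matrix of sentence embeddings:
# subset_indices = lcg_filter(embeddings)  # 6K-sample subset, as in the paper
```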
Anthology ID:
2025.findings-emnlp.437
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8233–8240
URL:
https://aclanthology.org/2025.findings-emnlp.437/
Cite (ACL):
Hongyi Cai, Jie Li, Mohammad Mahdinur Rahman, and Wenzhen Dong. 2025. Low-Confidence Gold: Refining Low-Confidence Samples for Efficient Instruction Tuning. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 8233–8240, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Low-Confidence Gold: Refining Low-Confidence Samples for Efficient Instruction Tuning (Cai et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.437.pdf
Checklist:
https://aclanthology.org/2025.findings-emnlp.437.checklist.pdf