Smaller Language Models are capable of selecting Instruction-Tuning Training Data for Larger Language Models

Dheeraj Mekala, Alex Nguyen, Jingbo Shang


Abstract
Instruction-tuning language models has become a crucial step in aligning them for general use. Typically, this process involves extensive training on large datasets, incurring high training costs. In this paper, we introduce a novel training data selection method based on the learning percentage of the samples. We assert that current language models possess the capability to autonomously select high-quality training data, leading to performance comparable to or better than training on the entire dataset. Our experiments span different-sized models, revealing that this characteristic holds for models ranging from 1B (small) to 13B (large) in size. Moreover, we present the interesting finding that data hardness transfers across model sizes: a smaller 350M model can effectively curate high-quality training data with hard samples for a larger 13B model, resulting in an instruction-tuned model that is equal or superior to one trained on the complete dataset. Using open-sourced OPT and Llama-2 models up to 13B in size and two publicly available instruction-tuning training datasets, and evaluating with both automatic metrics and human judges, our paper introduces a novel approach to training data selection, showcasing a more efficient alternative.
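The selection criterion mentioned in the abstract, the per-sample learning percentage, can be sketched roughly as follows. This is a minimal illustration, assuming the learning percentage is measured as the share of a sample's perplexity drop achieved by the first epoch of a small proxy model (e.g., OPT-350M), with low values marking "hard" samples; the exact definition, thresholds, and hyperparameters are those of the paper, and the names below (`learning_percentage`, `select_hard_samples`, `keep_fraction`) are hypothetical.

```python
# Hedged sketch of learning-percentage-based data selection.
# The perplexity inputs are assumed to come from a small proxy model
# evaluated per sample before training, after epoch 1, and after the
# final epoch; the selection threshold is an assumed hyperparameter.

def learning_percentage(ppl_initial: float, ppl_epoch1: float, ppl_final: float) -> float:
    """Fraction of the total perplexity drop achieved by the end of epoch 1."""
    total_drop = ppl_initial - ppl_final
    if total_drop <= 0:
        return 1.0  # sample was never "learned"; treat it as learned early
    return (ppl_initial - ppl_epoch1) / total_drop

def select_hard_samples(samples, ppl_initial, ppl_epoch1, ppl_final, keep_fraction=0.5):
    """Keep the samples the small proxy model learned the slowest.

    samples:       list of instruction-tuning examples
    ppl_*:         per-sample perplexities from the proxy model
    keep_fraction: portion of the dataset to retain (assumed hyperparameter)
    """
    scores = [
        learning_percentage(p0, p1, pf)
        for p0, p1, pf in zip(ppl_initial, ppl_epoch1, ppl_final)
    ]
    # Lower learning percentage = learned later = "harder" sample.
    ranked = sorted(range(len(samples)), key=lambda i: scores[i])
    k = int(len(samples) * keep_fraction)
    return [samples[i] for i in ranked[:k]]
```

The selected subset would then be used to instruction-tune the larger target model (up to 13B in the paper) in place of the full dataset.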
Anthology ID: 2024.findings-acl.623
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 10456–10470
URL: https://aclanthology.org/2024.findings-acl.623
DOI: 10.18653/v1/2024.findings-acl.623
Cite (ACL):
Dheeraj Mekala, Alex Nguyen, and Jingbo Shang. 2024. Smaller Language Models are capable of selecting Instruction-Tuning Training Data for Larger Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 10456–10470, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Smaller Language Models are capable of selecting Instruction-Tuning Training Data for Larger Language Models (Mekala et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.623.pdf