Stratified Selective Sampling for Instruction Tuning with Dedicated Scoring Strategy

Paramita Mirza, Lucas Weber, Fabian Küch


Abstract
Recent work shows that post-training datasets for LLMs can be substantially downsampled without noticeably deteriorating performance. However, data selection often incurs high computational costs or is limited to narrow domains. In this paper, we demonstrate that data selection can be both efficient and universal by using a multi-step pipeline in which we efficiently bin data points into groups, estimate quality using specialized models, and score difficulty with a robust, lightweight method. Task-based categorization allows us to control the composition of our final data, which is crucial for fine-tuning multi-purpose models. To guarantee diversity, we improve upon previous work using embedding models and a clustering algorithm. This integrated strategy enables high-performance fine-tuning with minimal overhead.
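A minimal sketch (not the authors' released code) of how the described stratified selection could be wired up, assuming precomputed instruction embeddings, quality scores, difficulty scores, and task-category labels are already available. The function name stratified_select is hypothetical, k-means stands in for the paper's clustering step, and the additive score aggregation and equal per-task allocation are illustrative placeholders rather than the paper's dedicated scoring strategy:

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_select(embeddings, quality, difficulty, tasks, budget,
                      n_clusters=50, seed=0):
    """Pick `budget` examples, stratified by task category and diversified by clustering.

    embeddings : (N, d) array of instruction embeddings (any sentence encoder)
    quality    : (N,) quality scores, e.g. from a specialized quality model
    difficulty : (N,) difficulty scores from a lightweight scorer
    tasks      : length-N array of task-category labels
    """
    tasks = np.asarray(tasks)
    # Toy aggregation of the two scores; the paper's scoring strategy differs.
    combined = np.asarray(quality) + np.asarray(difficulty)
    categories = np.unique(tasks)
    per_task = budget // len(categories)  # equal allocation; composition control is a design choice
    selected = []
    for cat in categories:
        idx = np.where(tasks == cat)[0]
        k = min(n_clusters, len(idx))
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(embeddings[idx])
        # Within each cluster, rank members by combined score (best first).
        ranked = {c: idx[labels == c][np.argsort(-combined[idx[labels == c]])] for c in range(k)}
        # Round-robin over clusters so the per-task quota stays diverse.
        taken, pos = [], {c: 0 for c in range(k)}
        quota = min(per_task, len(idx))
        while len(taken) < quota:
            for c in range(k):
                if pos[c] < len(ranked[c]) and len(taken) < quota:
                    taken.append(ranked[c][pos[c]])
                    pos[c] += 1
        selected.extend(taken)
    return np.array(selected)
```

The returned indices would then be used to subsample the instruction-tuning corpus before fine-tuning; in the paper, composition across task categories is controlled explicitly rather than split evenly as in this sketch.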
Anthology ID:
2025.findings-emnlp.1086
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
19949–19974
URL:
https://aclanthology.org/2025.findings-emnlp.1086/
Cite (ACL):
Paramita Mirza, Lucas Weber, and Fabian Küch. 2025. Stratified Selective Sampling for Instruction Tuning with Dedicated Scoring Strategy. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 19949–19974, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Stratified Selective Sampling for Instruction Tuning with Dedicated Scoring Strategy (Mirza et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1086.pdf
Checklist:
2025.findings-emnlp.1086.checklist.pdf