Balanced Data Sampling for Language Model Training with Clustering

Yunfan Shao, Linyang Li, Zhaoye Fei, Hang Yan, Dahua Lin, Xipeng Qiu


Abstract
Data plays a fundamental role in the training of Large Language Models (LLMs). While attention has been paid to the collection and composition of datasets, determining the data sampling strategy during training remains an open question. Most LLMs are trained with a simple strategy: random sampling. However, this strategy ignores the unbalanced nature of the training data distribution, which can be sub-optimal. In this paper, we propose ClusterClip Sampling to balance the text distribution of training data for better model training. Specifically, ClusterClip Sampling utilizes data clustering to reflect the data distribution of the training set and balances common and rare samples during training based on the cluster results. A repetition clip operation is introduced to mitigate the overfitting caused by samples from certain clusters. Extensive experiments validate the effectiveness of ClusterClip Sampling, which outperforms random sampling and other cluster-based sampling variants across various training datasets and large language models.
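The abstract only outlines the method at a high level, so the following is a minimal, illustrative Python sketch of cluster-balanced sampling with a repetition clip, not the authors' implementation. It assumes k-means clustering over document embeddings, uniform sampling across clusters (which up-weights rare clusters relative to frequency-based random sampling), and a per-document repetition cap; names such as `clusterclip_order` and `max_repeats` are hypothetical.

```python
# Illustrative sketch of cluster-balanced sampling with a repetition clip.
# Assumptions (not taken from the paper): k-means over document embeddings,
# clusters drawn uniformly so rare clusters are up-weighted, and documents
# repeated more than `max_repeats` times are clipped from further sampling.

import random
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans


def cluster_documents(embeddings: np.ndarray, num_clusters: int = 16) -> np.ndarray:
    """Assign each document to a cluster based on its embedding."""
    kmeans = KMeans(n_clusters=num_clusters, random_state=0)
    return kmeans.fit_predict(embeddings)


def clusterclip_order(labels: np.ndarray, num_steps: int, max_repeats: int = 2):
    """Yield document indices: clusters drawn uniformly, documents drawn
    uniformly within the chosen cluster, with per-document repetition clipping."""
    clusters = defaultdict(list)
    for idx, label in enumerate(labels):
        clusters[int(label)].append(idx)

    seen = defaultdict(int)
    for _ in range(num_steps):
        active = [c for c, docs in clusters.items() if docs]
        if not active:
            break  # every document has hit the repetition cap
        cluster = random.choice(active)         # uniform over clusters -> rare clusters up-weighted
        doc = random.choice(clusters[cluster])  # uniform within the chosen cluster
        seen[doc] += 1
        if seen[doc] >= max_repeats:            # clip: stop resampling over-repeated documents
            clusters[cluster].remove(doc)
        yield doc


# Usage: embeddings could come from any sentence encoder; random values stand in here.
embeddings = np.random.rand(1000, 64)
labels = cluster_documents(embeddings, num_clusters=8)
order = list(clusterclip_order(labels, num_steps=2000, max_repeats=2))
```

Sampling clusters uniformly rather than in proportion to their size is one simple way to balance common and rare data; the clip step keeps documents in small clusters from being repeated so often that the model overfits them.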
Anthology ID: 2024.findings-acl.833
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14012–14023
URL: https://aclanthology.org/2024.findings-acl.833
Cite (ACL): Yunfan Shao, Linyang Li, Zhaoye Fei, Hang Yan, Dahua Lin, and Xipeng Qiu. 2024. Balanced Data Sampling for Language Model Training with Clustering. In Findings of the Association for Computational Linguistics ACL 2024, pages 14012–14023, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Balanced Data Sampling for Language Model Training with Clustering (Shao et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.833.pdf