Mitigating Training Imbalance in LLM Fine-Tuning via Selective Parameter Merging

Yiming Ju, Ziyi Ni, Xingrun Xing, Zhixiong Zeng, Hanyu Zhao, Siqi Fan, Zheng Zhang
Abstract
Supervised fine-tuning (SFT) is crucial for adapting Large Language Models (LLMs) to specific tasks. In this work, we demonstrate that the order of training data can lead to significant training imbalances, potentially resulting in performance degradation. Consequently, we propose to mitigate this imbalance by merging SFT models fine-tuned with different data orders, thereby enhancing the overall effectiveness of SFT. Additionally, we introduce a novel technique, “parameter-selection merging,” which outperforms traditional weighted-average methods on five datasets. Further, through analysis and ablation studies, we validate the effectiveness of our method and identify the sources of performance improvements.
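The abstract contrasts "parameter-selection merging" with traditional weighted averaging. As a minimal illustrative sketch (not the authors' released implementation), one can read parameter-selection merging as choosing each parameter element from one of the K fine-tuned models rather than averaging across them; all function names and the element-wise random-selection rule below are assumptions for illustration:

```python
import numpy as np

def weighted_average_merge(models, weights=None):
    """Baseline: element-wise weighted average of model parameters.
    `models` is a list of dicts mapping parameter names to arrays."""
    k = len(models)
    weights = weights if weights is not None else [1.0 / k] * k
    return {name: sum(w * m[name] for w, m in zip(weights, models))
            for name in models[0]}

def parameter_selection_merge(models, seed=0):
    """Illustrative sketch of selective merging: for each parameter
    element, take the value from one randomly chosen model instead
    of averaging (an assumed selection rule, for exposition only)."""
    rng = np.random.default_rng(seed)
    merged = {}
    for name in models[0]:
        # Stack the k models' copies of this parameter: shape (k, *shape).
        stacked = np.stack([m[name] for m in models])
        # Independently pick a source model index for every element.
        choice = rng.integers(0, len(models), size=models[0][name].shape)
        merged[name] = np.take_along_axis(stacked, choice[None, ...], axis=0)[0]
    return merged
```

In this sketch the merged model's every entry comes verbatim from one of the input models, so the result is never a value none of them produced, unlike a weighted average.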
Anthology ID:
2024.emnlp-main.892
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15952–15959
URL:
https://aclanthology.org/2024.emnlp-main.892
Cite (ACL):
Yiming Ju, Ziyi Ni, Xingrun Xing, Zhixiong Zeng, Hanyu Zhao, Siqi Fan, and Zheng Zhang. 2024. Mitigating Training Imbalance in LLM Fine-Tuning via Selective Parameter Merging. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15952–15959, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Mitigating Training Imbalance in LLM Fine-Tuning via Selective Parameter Merging (Ju et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.892.pdf