Ziyi Ni


2024

Mitigating Training Imbalance in LLM Fine-Tuning via Selective Parameter Merging
Yiming Ju | Ziyi Ni | Xingrun Xing | Zhixiong Zeng | Hanyu Zhao | Siqi Fan | Zheng Zhang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Supervised fine-tuning (SFT) is crucial for adapting Large Language Models (LLMs) to specific tasks. In this work, we demonstrate that the order of training data can lead to significant training imbalances, potentially resulting in performance degradation. Consequently, we propose to mitigate this imbalance by merging SFT models fine-tuned with different data orders, thereby enhancing the overall effectiveness of SFT. Additionally, we introduce a novel technique, “parameter-selection merging,” which outperforms traditional weighted-average methods on five datasets. Further, through analysis and ablation studies, we validate the effectiveness of our method and identify the sources of performance improvements.
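The abstract contrasts "parameter-selection merging" with traditional weighted-average merging of SFT models. As an illustration only (the paper's exact selection rule is not given here), the sketch below assumes a simple variant: the weighted average combines every parameter across models, while parameter selection takes each parameter's value wholesale from one randomly chosen model. All function names and the per-parameter random choice are assumptions for this sketch, not the authors' method.

```python
import random

def weighted_average_merge(models, weights):
    """Traditional merging: per-parameter weighted average across models.

    `models` is a list of dicts mapping parameter name -> value (scalars
    here for simplicity; real models hold tensors)."""
    return {
        name: sum(w * m[name] for w, m in zip(weights, models))
        for name in models[0]
    }

def parameter_selection_merge(models, seed=0):
    """Hypothetical parameter-selection merging: each parameter is taken
    intact from one randomly selected model instead of being averaged."""
    rng = random.Random(seed)
    return {name: rng.choice(models)[name] for name in models[0]}

# Two toy "models" fine-tuned with different data orders (assumed values).
model_a = {"layer.w": 1.0, "layer.b": 2.0}
model_b = {"layer.w": 3.0, "layer.b": 4.0}

avg = weighted_average_merge([model_a, model_b], [0.5, 0.5])
sel = parameter_selection_merge([model_a, model_b], seed=0)
```

Under this reading, averaging blends every parameter (e.g. `avg["layer.w"] == 2.0`), whereas selection keeps each merged parameter identical to one of the source models, which may preserve parameter values that averaging would wash out.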