Self-Evolution Fine-Tuning for Policy Optimization

Ruijun Chen, Jiehao Liang, Shiping Gao, Fanqi Wan, Xiaojun Quan


Abstract
The alignment of large language models (LLMs) is crucial not only for unlocking their potential in specific tasks but also for ensuring that responses meet human expectations and adhere to safety and ethical principles. To address the challenges of current alignment methodologies, we introduce self-evolution fine-tuning (SEFT) for LLM alignment, which aims to eliminate the need for annotated samples while retaining the stability and efficiency of supervised fine-tuning (SFT). SEFT first trains an adaptive reviser to elevate low-quality responses while maintaining high-quality ones. The reviser then gradually guides the policy’s optimization by fine-tuning it with enhanced responses. The method excels at utilizing unlimited amounts of unannotated data to optimize policies via SFT. Our experiments on AlpacaEval and MT-Bench demonstrate the effectiveness of SEFT and its advantages over existing alignment techniques.
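The two-stage procedure described above (train an adaptive reviser, then repeatedly fine-tune the policy on reviser-enhanced responses) can be pictured with the minimal structural sketch below. The function names and the generate/revise/SFT placeholders are illustrative assumptions for exposition, not the authors' implementation.

```python
# Structural sketch of the SEFT loop as described in the abstract.
# All model calls are placeholders (assumptions), not the paper's actual code.

def generate(policy, prompt):
    """Placeholder: sample a response from the current policy LLM."""
    return f"draft response to: {prompt}"

def revise(reviser, prompt, response):
    """Placeholder: the adaptive reviser elevates low-quality responses
    and is expected to keep high-quality ones largely unchanged."""
    return f"revised({response})"

def sft_update(policy, pairs):
    """Placeholder: one supervised fine-tuning pass on (prompt, response) pairs."""
    return policy

def seft(policy, reviser, unannotated_prompts, rounds=3):
    """Iteratively fine-tune the policy on reviser-enhanced responses."""
    for _ in range(rounds):
        enhanced = []
        for prompt in unannotated_prompts:
            draft = generate(policy, prompt)         # policy drafts a response
            better = revise(reviser, prompt, draft)  # reviser enhances it if needed
            enhanced.append((prompt, better))
        policy = sft_update(policy, enhanced)        # standard SFT on enhanced data
    return policy
```

Note that only unannotated prompts are required: the supervision signal comes from the reviser's enhancements rather than from human-annotated responses.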
Anthology ID:
2024.findings-emnlp.238
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4120–4137
URL:
https://aclanthology.org/2024.findings-emnlp.238
Cite (ACL):
Ruijun Chen, Jiehao Liang, Shiping Gao, Fanqi Wan, and Xiaojun Quan. 2024. Self-Evolution Fine-Tuning for Policy Optimization. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4120–4137, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Self-Evolution Fine-Tuning for Policy Optimization (Chen et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.238.pdf