BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization

Gihun Lee, Minchan Jeong, Yujin Kim, Hojung Jung, Jaehoon Oh, SangMook Kim, Se-Young Yun


Abstract
While aligning Large Language Models (LLMs) with human preferences has shown remarkable success, aligning these models to diverse user preferences presents the further challenge of preserving previously acquired knowledge. This paper examines the impact of personalized preference optimization on LLMs, revealing that the extent of knowledge loss varies significantly with preference heterogeneity. Although previous approaches employ a KL constraint between the reference model and the policy model, we observe that they fail to maintain general knowledge and alignment when optimizing for personalized preferences. To this end, we introduce Base-Anchored Preference Optimization (BAPO), a simple yet effective approach that anchors optimization to the reference model's initial responses to mitigate forgetting while accommodating personalized alignment. BAPO effectively adapts to diverse user preferences while minimally affecting global knowledge or general alignment. Our experiments demonstrate the efficacy of BAPO in various setups.
Anthology ID:
2024.findings-emnlp.398
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6804–6820
URL:
https://aclanthology.org/2024.findings-emnlp.398
Cite (ACL):
Gihun Lee, Minchan Jeong, Yujin Kim, Hojung Jung, Jaehoon Oh, SangMook Kim, and Se-Young Yun. 2024. BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6804–6820, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization (Lee et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.398.pdf
Software:
2024.findings-emnlp.398.software.zip