Rethinking DPO: The Role of Rejected Responses in Preference Misalignment

Jae Hyeon Cho, JunHyeok Oh, Myunsoo Kim, Byung-Jun Lee


Abstract
Direct Preference Optimization (DPO) is a simple and efficient framework that has attracted substantial attention. However, it often struggles to meet its primary objectives—increasing the generation probability of chosen responses while reducing that of rejected responses—due to the dominant influence of rejected responses on the loss function. This imbalance leads to suboptimal performance in promoting preferred responses. In this work, we systematically analyze the limitations of DPO and of existing algorithms designed to achieve these objectives. To address these limitations, we propose Bounded-DPO (BDPO), a novel method that bounds the influence of rejected responses while maintaining the original optimization structure of DPO. Through theoretical analysis and empirical evaluations, we demonstrate that BDPO achieves a balanced optimization of the chosen and rejected responses, outperforming existing algorithms.
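
For context, the DPO objective the abstract refers to is the standard one from Rafailov et al. (2023): given a preference dataset \(\mathcal{D}\) of prompts \(x\) with chosen responses \(y_w\) and rejected responses \(y_l\), a policy \(\pi_\theta\), a reference policy \(\pi_{\mathrm{ref}}\), and a temperature \(\beta\),

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right].
\]

Because this loss depends only on the margin between the two log-ratio terms, it can be minimized by driving \(\pi_\theta(y_l \mid x)\) toward zero without ever increasing \(\pi_\theta(y_w \mid x)\); this is the imbalance the abstract attributes to the dominant influence of rejected responses. The paper's exact BDPO formulation is given in the PDF linked below. Purely as an illustrative sketch of what "bounding the influence of rejected responses" could look like (this specific form is an assumption, not the authors' method), one might clamp the rejected log-ratio from below, e.g. replace \(\log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\) with \(\max\bigl(\log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}, \ell_{\min}\bigr)\) for a fixed floor \(\ell_{\min}\), so that once a rejected response is sufficiently suppressed it stops contributing to the gradient.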
Anthology ID: 2025.findings-emnlp.433
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8159–8176
URL: https://aclanthology.org/2025.findings-emnlp.433/
Cite (ACL): Jae Hyeon Cho, JunHyeok Oh, Myunsoo Kim, and Byung-Jun Lee. 2025. Rethinking DPO: The Role of Rejected Responses in Preference Misalignment. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 8159–8176, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Rethinking DPO: The Role of Rejected Responses in Preference Misalignment (Cho et al., Findings 2025)
PDF: https://aclanthology.org/2025.findings-emnlp.433.pdf
Checklist: 2025.findings-emnlp.433.checklist.pdf