Direct Preference Optimization with an Offset

Afra Amini, Tim Vieira, Ryan Cotterell


Abstract
Direct preference optimization (DPO) is a successful fine-tuning strategy for aligning large language models with human preferences without the need to train a reward model or employ reinforcement learning. DPO, as originally formulated, relies on binary preference data and fine-tunes a language model to increase the likelihood of a preferred response over a dispreferred response. However, not all preference pairs are equal. Sometimes, the preferred response is only slightly better than the dispreferred one. In other cases, the preference is much stronger. For instance, if a response contains harmful or toxic content, the annotator will have a strong preference against that response. In this paper, we propose a generalization of DPO, termed DPO with an offset (ODPO), that does not treat every preference pair equally during fine-tuning. Intuitively, ODPO requires the difference between the likelihood of the preferred and dispreferred response to be greater than an offset value. The offset is determined based on the extent to which one response is preferred over another. Our experiments on various tasks suggest that ODPO significantly outperforms DPO in aligning language models, especially when the number of preference pairs is limited.
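To make the idea concrete, the sketch below shows one way a per-pair offset can be folded into a DPO-style objective. This is a minimal PyTorch sketch based only on the abstract, not the authors' implementation: the function name odpo_loss, the argument names, the beta scaling, and the choice to pass precomputed summed response log-probabilities and per-pair offsets are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def odpo_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps,
              offsets, beta=0.1):
    """DPO-style loss with a per-pair offset (margin) inside the sigmoid.

    All *_logps tensors hold summed log-probabilities of full responses
    under the policy / reference models; `offsets` holds a nonnegative
    margin per pair, reflecting how strongly the preferred response is
    rated over the dispreferred one (hypothetical interface).
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Require the implicit reward gap to exceed the offset, not just zero.
    logits = chosen_rewards - rejected_rewards - offsets
    return -F.logsigmoid(logits).mean()
```

In this sketch, setting offsets to zero recovers a standard DPO loss; per the abstract, a larger offset demands a larger likelihood gap between the preferred and dispreferred responses before a pair stops contributing to the loss.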
Anthology ID:
2024.findings-acl.592
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9954–9972
URL:
https://aclanthology.org/2024.findings-acl.592
Cite (ACL):
Afra Amini, Tim Vieira, and Ryan Cotterell. 2024. Direct Preference Optimization with an Offset. In Findings of the Association for Computational Linguistics ACL 2024, pages 9954–9972, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Direct Preference Optimization with an Offset (Amini et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.592.pdf