Direct Multi-Turn Preference Optimization for Language Agents

Wentao Shi, Mengqi Yuan, Junkang Wu, Qifan Wang, Fuli Feng


Abstract
Adapting Large Language Models (LLMs) to agent tasks is critical for developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation: it alleviates compounding errors and offers a means to optimize Reinforcement Learning (RL) objectives directly. However, applying DPO to multi-turn tasks presents challenges because the partition function can no longer be cancelled. Overcoming this obstacle requires making the partition function independent of the current state and addressing the length disparity between preferred and dispreferred trajectories. In this light, we replace the policy constraint in the RL objective with a state-action occupancy measure constraint and add length normalization to the Bradley-Terry model, yielding a novel loss function, DMPO, for multi-turn agent tasks, together with theoretical justification. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss.
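As a reading aid, here is a minimal sketch of what a length-normalized, DPO-style Bradley-Terry objective over trajectory pairs could look like. This is an illustration inferred from the abstract, not the paper's exact equation; the symbols (preferred/dispreferred trajectories τ_w and τ_l, inverse temperature β, reference policy π_ref) are assumptions standard to the DPO literature:

    \mathcal{L}_{\mathrm{DMPO}}(\theta)
    = -\,\mathbb{E}_{(\tau_w,\,\tau_l)}\left[
        \log \sigma\!\left(
            \frac{\beta}{|\tau_w|} \sum_{t} \log \frac{\pi_\theta(a_t^w \mid s_t^w)}{\pi_{\mathrm{ref}}(a_t^w \mid s_t^w)}
            \;-\;
            \frac{\beta}{|\tau_l|} \sum_{t} \log \frac{\pi_\theta(a_t^l \mid s_t^l)}{\pi_{\mathrm{ref}}(a_t^l \mid s_t^l)}
        \right)
    \right]

where σ is the logistic function, (s_t, a_t) are per-turn states and actions, and |τ| is the trajectory length used for normalization. Dividing each trajectory's accumulated log-ratio by its own length is one natural reading of "adding length normalization to the Bradley-Terry model," since it removes the length disparity between preferred and dispreferred trajectories that the abstract identifies.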
Anthology ID: 2024.emnlp-main.138
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 2312–2324
URL: https://aclanthology.org/2024.emnlp-main.138
Cite (ACL):
Wentao Shi, Mengqi Yuan, Junkang Wu, Qifan Wang, and Fuli Feng. 2024. Direct Multi-Turn Preference Optimization for Language Agents. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2312–2324, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Direct Multi-Turn Preference Optimization for Language Agents (Shi et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.138.pdf