Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding

Guangyu Yang, Jinghong Chen, Weizhe Lin, Bill Byrne


Abstract
Minimum Bayes Risk (MBR) decoding can significantly improve the translation performance of Multilingual Large Language Models (MLLMs). However, MBR decoding is computationally expensive. We show how the recently developed Reinforcement Learning technique, Direct Preference Optimization (DPO), can fine-tune MLLMs to obtain the gains of MBR without any additional computation at inference time. Our method uses only a small monolingual fine-tuning set and yields significantly improved performance on multiple NMT test sets compared to MLLMs without DPO.
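As background for the abstract, the following is a brief sketch of the standard MBR decision rule and the standard DPO objective from the literature. The hypothesis set \mathcal{H}, utility u, preferred/dispreferred translations y_w/y_l, reference policy \pi_{\mathrm{ref}}, and scaling factor \beta are generic notation rather than this paper's specific choices (e.g., its utility metric or how preference pairs are built from MBR output), which are described in the full paper.

% Standard MBR decision rule: pick the candidate in a sampled hypothesis set H
% with the highest average utility u against the other candidates.
\[
  y_{\mathrm{MBR}} \;=\; \arg\max_{y \in \mathcal{H}}
  \;\frac{1}{|\mathcal{H}|} \sum_{y' \in \mathcal{H}} u(y, y')
\]
% Standard DPO loss (Rafailov et al., 2023): y_w is the preferred and y_l the
% dispreferred translation for source x; \pi_ref is the frozen starting model
% and \beta scales the implicit reward margin.
\[
  \mathcal{L}_{\mathrm{DPO}}(\theta) \;=\;
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
\]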
Anthology ID: 2024.naacl-short.34
Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 391–398
URL: https://aclanthology.org/2024.naacl-short.34
DOI: 10.18653/v1/2024.naacl-short.34
Cite (ACL): Guangyu Yang, Jinghong Chen, Weizhe Lin, and Bill Byrne. 2024. Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 391–398, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding (Yang et al., NAACL 2024)
PDF: https://aclanthology.org/2024.naacl-short.34.pdf