Post-edits Are Preferences Too

Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck


Abstract
Preference Optimization (PO) techniques are currently among the state-of-the-art methods for fine-tuning large language models (LLMs) on pairwise preference feedback from human annotators. However, in machine translation, this sort of feedback can be difficult to solicit. Additionally, Kreutzer et al. (2018) have shown that, for machine translation, pairwise preferences are less reliable than other forms of human feedback, such as 5-point ratings. We examine post-edits to see if they can be a source of reliable human preferences by construction. In PO, a human annotator is shown sequences $s_1$ and $s_2$ and asked for a preference judgment, whereas in post-editing, editors create $s_1$ and know that it should be better than $s_2$. We attempt to use these implicit preferences for PO and show that doing so helps the model move towards post-edit-like hypotheses and away from machine-translation-like hypotheses. Furthermore, we show that the best results are obtained by pre-training the model with supervised fine-tuning (SFT) on post-edits in order to promote post-edit-like hypotheses to the top output ranks.
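A minimal sketch of how such implicit preference pairs could be used, assuming a DPO-style objective (one common PO method; the abstract does not name the specific variant): the post-edit $s_1$ serves as the preferred output and the machine translation $s_2$ it was created from as the dispreferred one, given source sentence $x$:

$$\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,s_1,\,s_2)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(s_1 \mid x)}{\pi_{\mathrm{ref}}(s_1 \mid x)} \;-\; \beta \log \frac{\pi_\theta(s_2 \mid x)}{\pi_{\mathrm{ref}}(s_2 \mid x)}\right)\right]$$

where $\pi_\theta$ is the model being fine-tuned, $\pi_{\mathrm{ref}}$ a frozen reference model (e.g., the SFT checkpoint), $\sigma$ the logistic function, and $\beta$ a scaling hyperparameter.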
Anthology ID:
2024.wmt-1.122
Volume:
Proceedings of the Ninth Conference on Machine Translation
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue:
WMT
Publisher:
Association for Computational Linguistics
Pages:
1289–1300
URL:
https://aclanthology.org/2024.wmt-1.122
Cite (ACL):
Nathaniel Berger, Stefan Riezler, Miriam Exel, and Matthias Huck. 2024. Post-edits Are Preferences Too. In Proceedings of the Ninth Conference on Machine Translation, pages 1289–1300, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Post-edits Are Preferences Too (Berger et al., WMT 2024)
PDF:
https://aclanthology.org/2024.wmt-1.122.pdf