Aligning Neural Machine Translation Models: Human Feedback in Training and Inference

Miguel Ramos, Patrick Fernandes, António Farinhas, André Martins


Abstract
Reinforcement learning from human feedback (RLHF) is a recent technique to improve the quality of the text generated by a language model, making it closer to what humans would generate. A core ingredient in RLHF's success in aligning and improving large language models (LLMs) is its reward model, trained using human feedback on model outputs. In machine translation (MT), where metrics trained from human annotations can readily be used as reward models, recent methods using minimum Bayes risk decoding and reranking have succeeded in improving the final quality of translation. In this study, we comprehensively explore and compare techniques for integrating quality metrics as reward models into the MT pipeline. This includes using the reward model for data filtering, during training through RL, and at inference time through reranking techniques, and we assess the effects of combining these in a unified approach. Our experimental results, conducted across multiple translation tasks, underscore the crucial role of effective data filtering, based on estimated quality, in harnessing the full potential of RL in enhancing MT quality. Furthermore, our findings demonstrate the effectiveness of combining RL training with reranking techniques, showcasing substantial improvements in translation quality.
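As a concrete illustration of the inference-time reranking the abstract mentions, below is a minimal sketch of sampling-based minimum Bayes risk (MBR) decoding with a learned quality metric as the utility function. This is not the authors' implementation: `quality_score` is a hypothetical placeholder for a trained metric (e.g., a COMET-style model), and the toy overlap scorer stands in only so the example runs end to end.

```python
# Minimal sketch of MBR decoding / N-best reranking with a learned quality
# metric as the utility function. Illustrative only: `quality_score` is a
# hypothetical stand-in for a trained metric such as a COMET-style model.

from typing import Callable, List


def quality_score(source: str, hypothesis: str, reference: str) -> float:
    """Hypothetical placeholder for a learned quality metric.

    A real system would call a trained model here; this toy version just
    rewards token overlap (Jaccard similarity) with the reference.
    """
    hyp_tokens, ref_tokens = set(hypothesis.split()), set(reference.split())
    if not hyp_tokens or not ref_tokens:
        return 0.0
    return len(hyp_tokens & ref_tokens) / len(hyp_tokens | ref_tokens)


def mbr_decode(
    source: str,
    candidates: List[str],
    utility: Callable[[str, str, str], float] = quality_score,
) -> str:
    """Return the candidate with the highest expected utility, using the
    other sampled candidates as pseudo-references (standard sampling-based MBR)."""
    best, best_score = candidates[0], float("-inf")
    for hyp in candidates:
        # Average utility of `hyp` against all candidates as pseudo-references.
        score = sum(utility(source, hyp, ref) for ref in candidates) / len(candidates)
        if score > best_score:
            best, best_score = hyp, score
    return best


if __name__ == "__main__":
    src = "O gato sentou-se no tapete."
    samples = [
        "The cat sat on the mat.",
        "The cat sat down on the carpet.",
        "A cat is sitting on the mat.",
    ]
    print(mbr_decode(src, samples))
```

In practice, the candidate list would come from sampling the MT model, and the same metric could serve both as the MBR utility here and as the reward model during RL training, which is the combination the paper evaluates.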
Anthology ID:
2024.eamt-1.22
Volume:
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
Month:
June
Year:
2024
Address:
Sheffield, UK
Editors:
Carolina Scarton, Charlotte Prescott, Chris Bayliss, Chris Oakley, Joanna Wright, Stuart Wrigley, Xingyi Song, Edward Gow-Smith, Rachel Bawden, Víctor M Sánchez-Cartagena, Patrick Cadwell, Ekaterina Lapshinova-Koltunski, Vera Cabarrão, Konstantinos Chatzitheodorou, Mary Nurminen, Diptesh Kanojia, Helena Moniz
Venue:
EAMT
Publisher:
European Association for Machine Translation (EAMT)
Pages:
258–274
URL:
https://aclanthology.org/2024.eamt-1.22
Cite (ACL):
Miguel Ramos, Patrick Fernandes, António Farinhas, and André Martins. 2024. Aligning Neural Machine Translation Models: Human Feedback in Training and Inference. In Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1), pages 258–274, Sheffield, UK. European Association for Machine Translation (EAMT).
Cite (Informal):
Aligning Neural Machine Translation Models: Human Feedback in Training and Inference (Ramos et al., EAMT 2024)
PDF:
https://aclanthology.org/2024.eamt-1.22.pdf