Guiding Large Language Models to Post-Edit Machine Translation with Error Annotations

Dayeon Ki, Marine Carpuat


Abstract
Machine Translation (MT) remains one of the last NLP tasks where large language models (LLMs) have not yet replaced dedicated supervised systems. This work exploits the complementary strengths of LLMs and supervised MT by guiding LLMs to automatically post-edit MT with external feedback on its quality, derived from Multidimensional Quality Metric (MQM) annotations. Working with LLaMA-2 models, we consider prompting strategies varying the nature of feedback provided and then fine-tune the LLM to improve its ability to exploit the provided guidance. Through experiments on Chinese-English, English-German, and English-Russian MQM data, we demonstrate that prompting LLMs to post-edit MT improves TER, BLEU and COMET scores, although the benefits of fine-grained feedback are not clear. Fine-tuning helps integrate fine-grained feedback more effectively and further improves translation quality based on both automatic and human evaluation.
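To make the setup concrete, the sketch below shows one way an MQM-guided post-editing prompt could be constructed. This is a minimal illustration, not the authors' released code: the field names ("span", "category", "severity"), the example annotation, and the prompt wording are all assumptions; the paper's actual templates and feedback variants differ across its prompting strategies.

# Minimal sketch (hypothetical): build a post-editing prompt from
# MQM-style error annotations. All field names and wording are
# illustrative assumptions, not the paper's actual templates.

def build_postedit_prompt(source, hypothesis, mqm_errors,
                          src_lang="Chinese", tgt_lang="English"):
    """Compose a prompt asking an LLM to post-edit an MT hypothesis
    using fine-grained MQM feedback (error span, category, severity)."""
    feedback = "\n".join(
        f'- "{e["span"]}": {e["severity"]} {e["category"]} error'
        for e in mqm_errors
    ) or "No errors annotated."
    return (
        f"Improve the following {src_lang}-to-{tgt_lang} translation by "
        f"correcting the annotated errors.\n\n"
        f"Source: {source}\n"
        f"MT output: {hypothesis}\n"
        f"Annotated errors (MQM):\n{feedback}\n\n"
        f"Improved translation:"
    )

# Example with a hypothetical annotation:
prompt = build_postedit_prompt(
    source="他昨天买了一本书。",
    hypothesis="He bought a book tomorrow.",
    mqm_errors=[{"span": "tomorrow",
                 "category": "accuracy/mistranslation",
                 "severity": "major"}],
)
print(prompt)

In the abstract's terms, varying how much of the annotation is surfaced in the prompt, from a generic "this translation contains errors" up to span-level severity and category as above, roughly corresponds to the coarser and finer-grained feedback conditions the paper compares.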
Anthology ID: 2024.findings-naacl.265
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4253–4273
URL: https://aclanthology.org/2024.findings-naacl.265
Cite (ACL): Dayeon Ki and Marine Carpuat. 2024. Guiding Large Language Models to Post-Edit Machine Translation with Error Annotations. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4253–4273, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Guiding Large Language Models to Post-Edit Machine Translation with Error Annotations (Ki & Carpuat, Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.265.pdf
Copyright: 2024.findings-naacl.265.copyright.pdf