LLMs in Post-Translation Workflows: Comparing Performance in Post-Editing and Error Analysis

Celia Uguet, Fred Bane, Mahmoud Aymo, João Torres, Anna Zaretskaya, Tània Blanch Miró


Abstract
This study presents a comprehensive comparison of three leading LLMs (GPT-4, Claude 3, and Gemini) on two translation-related tasks, automatic post-editing and Multidimensional Quality Metrics (MQM) error annotation, across four languages. Using the pharmaceutical EMEA corpus to maintain domain specificity and minimize data contamination, we examine the models' performance on both tasks. Our findings reveal the nuanced capabilities of LLMs in machine translation post-editing (MTPE) and MQM annotation, hinting at their potential to streamline and optimize translation workflows. Future directions include fine-tuning LLMs for task-specific improvements and integrating style guides for enhanced translation quality.
Anthology ID:
2024.eamt-1.32
Volume:
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
Month:
June
Year:
2024
Address:
Sheffield, UK
Editors:
Carolina Scarton, Charlotte Prescott, Chris Bayliss, Chris Oakley, Joanna Wright, Stuart Wrigley, Xingyi Song, Edward Gow-Smith, Rachel Bawden, Víctor M Sánchez-Cartagena, Patrick Cadwell, Ekaterina Lapshinova-Koltunski, Vera Cabarrão, Konstantinos Chatzitheodorou, Mary Nurminen, Diptesh Kanojia, Helena Moniz
Venue:
EAMT
Publisher:
European Association for Machine Translation (EAMT)
Pages:
373–386
URL:
https://aclanthology.org/2024.eamt-1.32
Cite (ACL):
Celia Uguet, Fred Bane, Mahmoud Aymo, João Torres, Anna Zaretskaya, and Tània Blanch Miró. 2024. LLMs in Post-Translation Workflows: Comparing Performance in Post-Editing and Error Analysis. In Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1), pages 373–386, Sheffield, UK. European Association for Machine Translation (EAMT).
Cite (Informal):
LLMs in Post-Translation Workflows: Comparing Performance in Post-Editing and Error Analysis (Uguet et al., EAMT 2024)
PDF:
https://aclanthology.org/2024.eamt-1.32.pdf