Marcos V Treviso


2024

xTower: A Multilingual LLM for Explaining and Correcting Translation Errors
Marcos V Treviso | Nuno M Guerreiro | Sweta Agrawal | Ricardo Rei | José Pombal | Tania Vaz | Helena Wu | Beatriz Silva | Daan Van Stigt | Andre Martins
Findings of the Association for Computational Linguistics: EMNLP 2024

While machine translation (MT) systems are achieving increasingly strong performance on benchmarks, they often produce translations with errors and anomalies. Understanding these errors can potentially help improve the translation quality and user experience. This paper introduces xTower, an open large language model (LLM) built on top of TowerBase, designed to provide free-text explanations for translation errors in order to guide the generation of a corrected translation. The quality of the explanations generated by xTower is assessed via both intrinsic and extrinsic evaluation. We ask expert translators to evaluate the quality of the explanations across two dimensions: relatedness to the error span being explained, and helpfulness in understanding the error and improving translation quality. Extrinsically, we test xTower across various experimental setups in generating translation corrections, demonstrating significant improvements in translation quality. Our findings highlight xTower’s potential not only to produce plausible and helpful explanations of automatic translations, but also to leverage them to suggest corrected translations.
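To make the described workflow concrete, below is a minimal sketch (not the authors' released code) of how an xTower-style model might be prompted: the input pairs a source sentence with a machine translation annotated with an error span, and the model is asked to explain the error and then propose a corrected translation. The model identifier, prompt template, and example sentences are illustrative assumptions, not taken from the paper.

# Minimal prompting sketch for an xTower-style explanation-and-correction model.
# The checkpoint name and prompt format are hypothetical placeholders.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="path/to/xtower-style-model",  # hypothetical checkpoint
    device_map="auto",
)

prompt = (
    "You are given a source sentence, its machine translation, and a marked "
    "error span. Explain the error and then provide a corrected translation.\n"
    "Source (English): The committee tabled the proposal until next week.\n"
    "Translation (Portuguese): O comité apresentou a proposta até à próxima semana.\n"
    "Error span: 'apresentou' (major accuracy/mistranslation)\n"
    "Explanation and correction:"
)

output = generator(prompt, max_new_tokens=256, do_sample=False)
print(output[0]["generated_text"])

In this sketch, the free-text explanation and the corrected translation are produced in a single generation pass; the paper's extrinsic evaluation varies such setups to measure the effect of explanations on correction quality.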