Jiayu Liao


2025

TEaR: Improving LLM-based Machine Translation with Systematic Self-Refinement
Zhaopeng Feng | Yan Zhang | Hao Li | Bei Wu | Jiayu Liao | Wenqiang Liu | Jun Lang | Yang Feng | Jian Wu | Zuozhu Liu
Findings of the Association for Computational Linguistics: NAACL 2025

Large Language Models (LLMs) have achieved impressive results in Machine Translation (MT). However, human evaluations reveal that LLM-generated translations still contain various errors. Notably, feeding the error information back into the LLMs can facilitate self-refinement, leading to enhanced translation quality. Motivated by these findings, we introduce TEaR (Translate, Estimate, and Refine), a systematic LLM-based self-refinement framework aimed at bootstrapping translation performance. Our key results show that: 1) the TEaR framework enables LLMs to improve their translation quality relying solely on self-feedback, as measured by both automatic metrics and Multidimensional Quality Metrics (MQM) scores; 2) TEaR autonomously selects improvements, ensuring a robust translation-quality baseline while outperforming both internal-refinement and external-feedback methods. Error analysis and iterative refinement experiments demonstrate its ability to continuously reduce translation errors and enhance overall translation quality. Our code and data are publicly available at https://github.com/fzp0424/self_correct_mt.
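The abstract describes a three-stage loop: translate, estimate errors via self-feedback, then refine. The sketch below is a minimal illustration of that pattern, not the paper's implementation; `call_llm` is a hypothetical stand-in for any chat-completion client, and the prompt wording is invented for illustration (the actual templates are in the linked repository).

# Minimal sketch of a Translate-Estimate-Refine loop in the spirit of TEaR.
# `call_llm` and all prompt strings are assumptions, not the paper's own.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; plug in your model client here."""
    raise NotImplementedError

def tear_translate(source: str, src_lang: str, tgt_lang: str,
                   max_rounds: int = 1) -> str:
    # Translate: produce an initial draft translation.
    draft = call_llm(
        f"Translate the following {src_lang} text into {tgt_lang}:\n{source}"
    )
    for _ in range(max_rounds):
        # Estimate: ask the model to assess its own draft (MQM-style errors).
        feedback = call_llm(
            f"List translation errors (accuracy, fluency, terminology) in this "
            f"{tgt_lang} translation of the {src_lang} source.\n"
            f"Source: {source}\nTranslation: {draft}\n"
            f"Reply 'No errors' if none are found."
        )
        if "no errors" in feedback.lower():
            break  # self-estimation found nothing to fix; keep the draft
        # Refine: regenerate the translation conditioned on the self-feedback.
        draft = call_llm(
            f"Revise the translation to fix the listed errors.\n"
            f"Source: {source}\nDraft: {draft}\nErrors: {feedback}"
        )
    return draft

Running the loop for more than one round corresponds to the paper's iterative-refinement experiments, where repeated estimate-refine passes continue to reduce error counts.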