Investigating the Helpfulness of Word-Level Quality Estimation for Post-Editing Machine Translation Output

Raksha Shenoy, Nico Herbig, Antonio Krüger, Josef van Genabith


Abstract
Compared to fully manual translation, post-editing (PE) machine translation (MT) output can save time and reduce errors. Automatic word-level quality estimation (QE) aims to predict the correctness of words in MT output and holds great promise to aid PE by flagging problematic output. Quality of QE is crucial, as incorrect QE might lead to translators missing errors or wasting time on already correct MT output. Achieving accurate automatic word-level QE is very hard, and it is currently not known (i) at what quality threshold QE actually begins to be useful for human PE, and (ii) how to best present word-level QE information to translators. In particular, should word-level QE visualization indicate uncertainty of the QE model or not? In this paper, we address both research questions with real and simulated word-level QE, visualizations, and user studies, where time, subjective ratings, and quality of the final translations are assessed. Results show that current word-level QE models are not yet good enough to support PE. Instead, quality levels of > 80% F1 are required. For helpful quality levels, a visualization reflecting the uncertainty of the QE model is preferred. Our analysis further shows that speed gains achieved through QE are not merely a result of blindly trusting the QE system, but that the quality of the final translations also improves. The threshold results from the paper establish a quality goal for future word-level QE research.
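To make the > 80% F1 threshold concrete, below is a minimal illustrative sketch of how word-level QE output is typically scored: each MT token carries an OK/BAD tag, and system predictions are compared against gold tags. The exact F1 variant the paper uses (e.g., F1 on the BAD class vs. the product F1-OK × F1-BAD) is not stated on this page, so F1 over the BAD class is assumed here purely for illustration; the helper name and toy data are hypothetical.

```python
# Sketch of word-level QE scoring over OK/BAD tags (illustrative assumption;
# the paper may use a different F1 variant, e.g., F1-OK * F1-BAD).

def f1_for_tag(gold, pred, tag="BAD"):
    """F1 of predicted tags against gold tags for one tag class (hypothetical helper)."""
    tp = sum(g == tag and p == tag for g, p in zip(gold, pred))
    fp = sum(g != tag and p == tag for g, p in zip(gold, pred))
    fn = sum(g == tag and p != tag for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: six MT tokens, two of which the gold annotation marks as BAD.
gold = ["OK", "OK", "BAD", "OK", "BAD", "OK"]
pred = ["OK", "BAD", "BAD", "OK", "OK", "OK"]
print(f"F1-BAD: {f1_for_tag(gold, pred):.2f}")  # 0.50 for this toy example
```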
Anthology ID:
2021.emnlp-main.799
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10173–10185
URL:
https://aclanthology.org/2021.emnlp-main.799
DOI:
10.18653/v1/2021.emnlp-main.799
Cite (ACL):
Raksha Shenoy, Nico Herbig, Antonio Krüger, and Josef van Genabith. 2021. Investigating the Helpfulness of Word-Level Quality Estimation for Post-Editing Machine Translation Output. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10173–10185, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Investigating the Helpfulness of Word-Level Quality Estimation for Post-Editing Machine Translation Output (Shenoy et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.799.pdf
Software:
 2021.emnlp-main.799.Software.zip
Video:
 https://aclanthology.org/2021.emnlp-main.799.mp4
Code:
nicoherbig/mmpe