Taking MT Evaluation Metrics to Extremes: Beyond Correlation with Human Judgments

Marina Fomicheva, Lucia Specia


Abstract
Automatic Machine Translation (MT) evaluation is an active field of research, with a handful of new metrics devised every year. Evaluation metrics are generally benchmarked against manual assessment of translation quality, with performance measured in terms of overall correlation with human scores. Much work has been dedicated to improving evaluation metrics to achieve a higher correlation with human judgments. However, little insight has been provided regarding the weaknesses and strengths of existing approaches and their behavior in different settings. In this work we conduct a broad meta-evaluation study of the performance of a wide range of evaluation metrics, focusing on three major aspects. First, we analyze the performance of the metrics when faced with different levels of translation quality, proposing a local dependency measure as an alternative to the standard, global correlation coefficient. We show that metric performance varies significantly across different levels of MT quality: metrics perform poorly when faced with low-quality translations and are not able to capture nuanced quality distinctions. Interestingly, we show that evaluating low-quality translations is also more challenging for humans. Second, we show that metrics are more reliable when evaluating neural MT than traditional statistical MT systems. Finally, we show that the differences in evaluation accuracy across metrics are maintained even when the gold-standard scores are based on different criteria.
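To illustrate the distinction the abstract draws between a single global correlation coefficient and a local, quality-dependent view of metric performance, the sketch below contrasts Pearson correlation computed over a whole test set with correlations computed within bands of human-judged quality. This is a minimal hypothetical example, not the authors' local dependency measure; the human_scores and metric_scores arrays are assumed, synthetic inputs.

```python
# Minimal sketch (not the paper's implementation): contrast one global
# correlation with correlations computed locally within bands of
# human-judged quality. Inputs here are synthetic for illustration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
human_scores = rng.uniform(0, 100, size=1000)            # e.g. direct-assessment scores
metric_scores = human_scores + rng.normal(0, 15, 1000)   # noisy automatic metric scores

# Global view: one correlation for the entire test set.
global_r, _ = pearsonr(metric_scores, human_scores)
print(f"global Pearson r = {global_r:.3f}")

# Local view: correlation inside each quality band, showing where the
# metric tracks human judgments well and where it does not.
bands = [(0, 25), (25, 50), (50, 75), (75, 100)]
for lo, hi in bands:
    mask = (human_scores >= lo) & (human_scores < hi)
    r, _ = pearsonr(metric_scores[mask], human_scores[mask])
    print(f"human scores in [{lo}, {hi}): local r = {r:.3f} (n = {mask.sum()})")
```

A metric can look reliable under the global view while its local correlations collapse in the low-quality bands, which is the kind of behavior the paper's analysis targets.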
Anthology ID:
J19-3004
Volume:
Computational Linguistics, Volume 45, Issue 3 - September 2019
Month:
September
Year:
2019
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
515–558
URL:
https://aclanthology.org/J19-3004
DOI:
10.1162/coli_a_00356
Cite (ACL):
Marina Fomicheva and Lucia Specia. 2019. Taking MT Evaluation Metrics to Extremes: Beyond Correlation with Human Judgments. Computational Linguistics, 45(3):515–558.
Cite (Informal):
Taking MT Evaluation Metrics to Extremes: Beyond Correlation with Human Judgments (Fomicheva & Specia, CL 2019)
PDF:
https://aclanthology.org/J19-3004.pdf