Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks?

Mara Chinea-Rios, Álvaro Peris, Francisco Casacuberta


Abstract
We present a comparison of automatic metrics against human evaluations of translation quality in several previously unexplored scenarios. Our experimentation was conducted on translation hypotheses that were problematic for the automatic metrics, as the results diverged greatly from one metric to another. We also compared three different translation technologies. Our evaluation shows that, in most cases, the metrics capture the human criteria. However, we observe failures of the automatic metrics when they are applied to certain domains and systems. Interestingly, we find that automatic metrics applied to neural machine translation hypotheses provide the most reliable results. Finally, we offer some advice for dealing with these problematic domains.
Anthology ID:
2018.eamt-main.9
Volume:
Proceedings of the 21st Annual Conference of the European Association for Machine Translation
Month:
May
Year:
2018
Address:
Alicante, Spain
Editors:
Juan Antonio Pérez-Ortiz, Felipe Sánchez-Martínez, Miquel Esplà-Gomis, Maja Popović, Celia Rico, André Martins, Joachim Van den Bogaert, Mikel L. Forcada
Venue:
EAMT
Pages:
109–118
URL:
https://aclanthology.org/2018.eamt-main.9
Cite (ACL):
Mara Chinea-Rios, Álvaro Peris, and Francisco Casacuberta. 2018. Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks? In Proceedings of the 21st Annual Conference of the European Association for Machine Translation, pages 109–118, Alicante, Spain.
Cite (Informal):
Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks? (Chinea-Rios et al., EAMT 2018)
PDF:
https://aclanthology.org/2018.eamt-main.9.pdf