Assessing Human-Parity in Machine Translation on the Segment Level

Yvette Graham, Christian Federmann, Maria Eskevich, Barry Haddow
Abstract
Recent machine translation shared tasks have shown top-performing systems to tie with or, in some cases, even outperform human translation. Such conclusions about system and human performance are, however, based on estimates aggregated from scores collected over large test sets of translations, and they leave some questions unanswered. For instance, the fact that a system significantly outperforms the human translator on average does not necessarily mean that it has done so for every translation in the test set. In particular, are there source segments in evaluation test sets that pose significant challenges for top-performing systems, and can such challenging segments go unnoticed due to the opacity of current human evaluation procedures? To provide insight into these questions, we carefully inspect the outputs of top-performing systems in the most recent WMT-19 news translation shared task for all language pairs in which a system either tied with or outperformed human translation. Our analysis provides a new method of identifying the remaining segments for which either machine or human performs poorly. For example, in our close inspection of WMT-19 English to German and German to English, we identify the segments that disjointly proved a challenge for human and machine. For English to Russian, no segments in our sample of translations posed a significant challenge for the human translator, while we again identify the set of segments that caused issues for the top-performing system.
Anthology ID:
2020.findings-emnlp.375
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4199–4207
URL:
https://aclanthology.org/2020.findings-emnlp.375
DOI:
10.18653/v1/2020.findings-emnlp.375
Cite (ACL):
Yvette Graham, Christian Federmann, Maria Eskevich, and Barry Haddow. 2020. Assessing Human-Parity in Machine Translation on the Segment Level. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4199–4207, Online. Association for Computational Linguistics.
Cite (Informal):
Assessing Human-Parity in Machine Translation on the Segment Level (Graham et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.375.pdf
Optional supplementary material:
2020.findings-emnlp.375.OptionalSupplementaryMaterial.pdf