A Study of Translation Edit Rate with Targeted Human Annotation

Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, John Makhoul


Abstract
We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We show that the single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU. We also define a human-targeted TER (or HTER) and show that it yields higher correlations with human judgments than BLEU—even when BLEU is given human-targeted references. Our results indicate that HTER correlates with human judgments better than HMETEOR and that the four-reference variants of TER and HTER correlate with human judgments as well as—or better than—a second human judgment does.
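The abstract's definition — the number of edits needed to change the system output into the reference — can be illustrated with a simplified sketch. Full TER also counts block shifts (moving a contiguous phrase) as single edits; the version below is a hypothetical reduction that uses only word-level insertions, deletions, and substitutions, normalized by reference length.

```python
def simplified_ter(hypothesis: str, reference: str) -> float:
    """Word-level edit distance divided by reference length.

    A minimal sketch of TER that omits the shift operation
    used in the full metric.
    """
    hyp, ref = hypothesis.split(), reference.split()
    # d[i][j] = minimum edits to turn hyp[:i] into ref[:j]
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(hyp)][len(ref)] / len(ref)

# One deletion ("big") against a 2-word reference -> 0.5
print(simplified_ter("a big cat", "a cat"))
```

For multiple references (as in the four-reference variant discussed above), TER takes the minimum score over all references.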
Anthology ID: 2006.amta-papers.25
Volume: Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers
Month: August 8-12
Year: 2006
Address: Cambridge, Massachusetts, USA
Venue: AMTA
Publisher: Association for Machine Translation in the Americas
Pages: 223–231
URL: https://aclanthology.org/2006.amta-papers.25
PDF: https://aclanthology.org/2006.amta-papers.25.pdf