Evaluating the Output of Machine Translation Systems

Alon Lavie


Abstract
This half-day tutorial provides a broad overview of how to evaluate translations produced by machine translation systems. It covers a survey of both human evaluation measures and commonly used automated metrics, and a review of how these are applied to various evaluation tasks, such as assessing the translation quality of MT-translated sentences, comparing the performance of alternative MT systems, or measuring the productivity gains of incorporating MT into translation workflows.
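
As a concrete illustration of the automated metrics the abstract refers to, the sketch below scores a small set of system translations against reference translations using the sacrebleu library. BLEU and chrF are chosen here only as examples of commonly used metrics; the sentences are invented for illustration and are not taken from the tutorial.

# A minimal sketch of scoring MT output with automated metrics
# (BLEU and chrF) via the sacrebleu library. The hypothesis and
# reference sentences are invented for illustration only.
import sacrebleu

# System translations, one string per segment.
hypotheses = [
    "The cat sat on the mat.",
    "He did not go to school today.",
]

# References: a list of reference sets, each aligned with the hypotheses.
references = [
    [
        "The cat is sitting on the mat.",
        "He did not attend school today.",
    ]
]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)

print(f"BLEU: {bleu.score:.2f}")  # corpus-level BLEU (0-100 scale)
print(f"chrF: {chrf.score:.2f}")  # corpus-level chrF (0-100 scale)
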
Anthology ID: 2011.mtsummit-tutorials.3
Volume: Proceedings of Machine Translation Summit XIII: Tutorial Abstracts
Month: September 19
Year: 2011
Address: Xiamen, China
Venue: MTSummit
URL: https://aclanthology.org/2011.mtsummit-tutorials.3
Cite (ACL): Alon Lavie. 2011. Evaluating the Output of Machine Translation Systems. In Proceedings of Machine Translation Summit XIII: Tutorial Abstracts, Xiamen, China.
Cite (Informal): Evaluating the Output of Machine Translation Systems (Lavie, MTSummit 2011)
PDF: https://aclanthology.org/2011.mtsummit-tutorials.3.pdf