Evaluation Briefs: Drawing on Translation Studies for Human Evaluation of MT

Ting Liu, Chi-kiu Lo, Elizabeth Marshman, Rebecca Knowles

Abstract
In this position paper, we examine ways in which researchers in machine translation and translation studies have approached the problem of evaluating the output of machine translation systems and, more broadly, the question of what it means to define translation quality. We explore their similarities and differences, highlighting the role that the purpose and context of translation play in translation studies approaches. We argue that evaluation of machine translation (e.g., in shared tasks) would benefit from additional insights from translation studies, and we suggest the introduction of an ‘evaluation brief’ (analogous to the ‘translation brief’) which could help set out useful context for annotators tasked with evaluating machine translation.
Anthology ID: 2024.amta-research.17
Volume: Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)
Month: September
Year: 2024
Address: Chicago, USA
Editors: Rebecca Knowles, Akiko Eriguchi, Shivali Goel
Venue: AMTA
Publisher: Association for Machine Translation in the Americas
Pages: 190–208
URL: https://aclanthology.org/2024.amta-research.17
Cite (ACL): Ting Liu, Chi-kiu Lo, Elizabeth Marshman, and Rebecca Knowles. 2024. Evaluation Briefs: Drawing on Translation Studies for Human Evaluation of MT. In Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 190–208, Chicago, USA. Association for Machine Translation in the Americas.
Cite (Informal): Evaluation Briefs: Drawing on Translation Studies for Human Evaluation of MT (Liu et al., AMTA 2024)
PDF: https://aclanthology.org/2024.amta-research.17.pdf