Opportunities for Human-centered Evaluation of Machine Translation Systems

Daniel Liebling, Katherine Heller, Samantha Robertson, Wesley Deng


Abstract
Machine translation models are embedded in larger user-facing systems. Although model evaluation has matured, evaluation at the systems level is still lacking. We review literature from both the translation studies and HCI communities about who uses machine translation and for what purposes. We emphasize an important difference between evaluating machine translation models and evaluating the physical and cultural systems in which they are embedded. We then propose opportunities for improved measurement of user-facing translation systems. We pay particular attention to the need for design and evaluation to engender trust and enhance user agency in future machine translation systems.
Anthology ID: 2022.findings-naacl.17
Volume: Findings of the Association for Computational Linguistics: NAACL 2022
Month: July
Year: 2022
Address: Seattle, United States
Editors: Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 229–240
URL: https://aclanthology.org/2022.findings-naacl.17
DOI: 10.18653/v1/2022.findings-naacl.17
Cite (ACL): Daniel Liebling, Katherine Heller, Samantha Robertson, and Wesley Deng. 2022. Opportunities for Human-centered Evaluation of Machine Translation Systems. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 229–240, Seattle, United States. Association for Computational Linguistics.
Cite (Informal): Opportunities for Human-centered Evaluation of Machine Translation Systems (Liebling et al., Findings 2022)
PDF: https://aclanthology.org/2022.findings-naacl.17.pdf
Data: MTOP