Toward More Effective Human Evaluation for Machine Translation

Belén Saldías Fuentes, George Foster, Markus Freitag, Qijun Tan


Abstract
Improvements in text generation technologies such as machine translation have necessitated more costly and time-consuming human evaluation procedures to ensure an accurate signal. We investigate a simple way to reduce cost by reducing the number of text segments that must be annotated in order to accurately predict a score for a complete test set. Using a sampling approach, we demonstrate that information from document membership and automatic metrics can help improve estimates compared to a pure random sampling baseline. We achieve gains of up to 20% in average absolute error by leveraging stratified sampling and control variates. Our techniques can improve estimates made from a fixed annotation budget, are easy to implement, and can be applied to any problem with structure similar to the one we study.
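The control-variate estimator the abstract refers to can be illustrated with a minimal sketch. The idea: an automatic metric score is available for every segment, so its exact test-set mean is known; annotating only a sample of segments, we correct the sampled human-score mean by how far the sampled metric mean drifts from the known full mean. All names and data below are hypothetical, not taken from the paper's implementation.

```python
import statistics

def estimate_mean_with_control_variate(human, metric, sample_idx):
    """Estimate the mean human score over all segments using human scores
    for only a sampled subset, with an automatic metric (known for every
    segment) serving as a control variate."""
    n = len(human)
    sampled_h = [human[i] for i in sample_idx]
    sampled_m = [metric[i] for i in sample_idx]
    k = len(sample_idx)
    mean_m_all = sum(metric) / n              # metric mean is known exactly
    mean_h = sum(sampled_h) / k               # plain sample mean of human scores
    mean_m = sum(sampled_m) / k               # sample mean of metric scores
    # Near-optimal coefficient c = cov(h, m) / var(m), estimated on the sample.
    cov = sum((h - mean_h) * (m - mean_m)
              for h, m in zip(sampled_h, sampled_m)) / (k - 1)
    var_m = statistics.variance(sampled_m)
    c = cov / var_m if var_m > 0 else 0.0
    # Shift the sample mean by the observed drift in the metric mean.
    return mean_h - c * (mean_m - mean_m_all)
```

The better the metric correlates with human judgments, the more variance this removes; with a perfectly correlated metric the correction recovers the true mean exactly. Stratified sampling (e.g., by document) composes with this by applying the estimator within each stratum and averaging with stratum weights.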
Anthology ID:
2022.humeval-1.7
Volume:
Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Anya Belz, Maja Popović, Ehud Reiter, Anastasia Shimorina
Venue:
HumEval
Publisher:
Association for Computational Linguistics
Pages:
76–89
URL:
https://aclanthology.org/2022.humeval-1.7
DOI:
10.18653/v1/2022.humeval-1.7
Cite (ACL):
Belén Saldías Fuentes, George Foster, Markus Freitag, and Qijun Tan. 2022. Toward More Effective Human Evaluation for Machine Translation. In Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval), pages 76–89, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Toward More Effective Human Evaluation for Machine Translation (Saldías Fuentes et al., HumEval 2022)
PDF:
https://aclanthology.org/2022.humeval-1.7.pdf
Video:
https://aclanthology.org/2022.humeval-1.7.mp4
Data
WMT 2020