With a Little Help from the Authors: Reproducing Human Evaluation of an MT Error Detector

Ondrej Platek, Mateusz Lango, Ondrej Dusek

Abstract
This work presents our efforts to reproduce the results of the human evaluation experiment presented in the paper by Vamvas and Sennrich (2022), which evaluated an automatic system for detecting over- and undertranslations (translations containing more or less information than the original) in machine translation (MT) outputs. Despite the high quality of the documentation and code provided by the authors, we discuss some problems we encountered in reproducing the exact experimental setup and offer recommendations for improving reproducibility. Our replicated results generally confirm the conclusions of the original study, but statistically significant differences were observed in some cases, suggesting high variability in human annotation.
Anthology ID: 2023.humeval-1.13
Volume: Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems
Month: September
Year: 2023
Address: Varna, Bulgaria
Editors: Anya Belz, Maja Popović, Ehud Reiter, Craig Thomson, João Sedoc
Venues: HumEval | WS
Publisher: INCOMA Ltd., Shoumen, Bulgaria
Pages: 145–152
URL: https://aclanthology.org/2023.humeval-1.13
Cite (ACL): Ondrej Platek, Mateusz Lango, and Ondrej Dusek. 2023. With a Little Help from the Authors: Reproducing Human Evaluation of an MT Error Detector. In Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems, pages 145–152, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal): With a Little Help from the Authors: Reproducing Human Evaluation of an MT Error Detector (Platek et al., HumEval-WS 2023)
PDF: https://aclanthology.org/2023.humeval-1.13.pdf