Data Sampling and (In)stability in Machine Translation Evaluation

Chi-kiu Lo, Rebecca Knowles


Abstract
We analyze the different data sampling approaches used to select data for human evaluation and ranking of machine translation systems at the highly influential Conference on Machine Translation (WMT). By using automatic evaluation metrics, we are able to focus on the impact of the data sampling procedure separately from questions about human annotator consistency. We provide evidence that the latest data sampling approach used at WMT skews the annotated data toward shorter documents, which are not necessarily representative of the full test set. Lastly, we examine a new data sampling method that uses the available labour budget to sample data in a more representative manner, with the goals of improving the representation of various document lengths in the sample and producing more stable rankings of system translation quality.
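To make the idea concrete, here is a minimal sketch (not the paper's actual procedure) of one way to sample documents for annotation under a fixed labour budget while keeping longer documents represented: documents are drawn without replacement with probability proportional to their length in segments, until the segment budget is spent. The function name and the budget-in-segments assumption are illustrative.

```python
import random

def sample_documents(doc_lengths, budget, seed=0):
    """Illustrative length-weighted sampling of documents under a budget.

    doc_lengths: dict mapping document id -> number of segments.
    budget: total number of segments the annotation budget can cover.
    Returns a list of sampled document ids whose total length <= budget.
    """
    rng = random.Random(seed)
    remaining = dict(doc_lengths)
    sampled, used = [], 0
    while remaining and used < budget:
        docs = list(remaining)
        # Weight each document by its segment count, so longer documents
        # are not systematically under-sampled.
        weights = [remaining[d] for d in docs]
        choice = rng.choices(docs, weights=weights, k=1)[0]
        if used + remaining[choice] > budget:
            # This document would overshoot the budget; drop it from
            # consideration and keep sampling among the rest.
            del remaining[choice]
            continue
        used += remaining.pop(choice)
        sampled.append(choice)
    return sampled
```

A uniform-over-documents sampler would instead tend to fill the budget with many short documents; weighting by length is one simple counterbalance, though it is only one of several possible designs.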
Anthology ID:
2023.findings-acl.826
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13064–13074
URL:
https://aclanthology.org/2023.findings-acl.826
DOI:
10.18653/v1/2023.findings-acl.826
Cite (ACL):
Chi-kiu Lo and Rebecca Knowles. 2023. Data Sampling and (In)stability in Machine Translation Evaluation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13064–13074, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Data Sampling and (In)stability in Machine Translation Evaluation (Lo & Knowles, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.826.pdf