Trained MT Metrics Learn to Cope with Machine-translated References

Jannis Vamvas, Tobias Domhan, Sony Trenous, Rico Sennrich, Eva Hasler


Abstract
Neural metrics trained on human evaluations of MT tend to correlate well with human judgments, but their behavior is not fully understood. In this paper, we perform a controlled experiment and compare a baseline metric that has not been trained on human evaluations (Prism) to a trained version of the same metric (Prism+FT). Surprisingly, we find that Prism+FT becomes more robust to machine-translated references, which are a notorious problem in MT evaluation. This suggests that the effects of metric training go beyond the intended effect of improving overall correlation with human judgments.
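The abstract's core comparison — whether a metric's segment scores stay well correlated with human judgments when references are machine-translated — can be illustrated with a small hedged sketch. The metric names, score arrays, and judgment values below are invented placeholders for illustration only, not data or code from the paper; the correlation statistic (Kendall's tau) is one common choice in MT metric evaluation.

```python
# Illustrative sketch: how robustness to machine-translated references could
# be probed by comparing metric-human correlations across reference conditions.
# All values are hypothetical, not results from Vamvas et al. (2023).
from scipy.stats import kendalltau

# Hypothetical human adequacy judgments for a handful of segments.
human_judgments = [0.9, 0.4, 0.7, 0.2, 0.8]

# Hypothetical segment scores from an untrained metric ("Prism") and a
# fine-tuned variant ("Prism+FT"), computed once against human references
# and once against machine-translated references.
scores = {
    ("Prism",    "human-ref"): [0.85, 0.50, 0.65, 0.30, 0.75],
    ("Prism",    "mt-ref"):    [0.60, 0.55, 0.50, 0.45, 0.58],
    ("Prism+FT", "human-ref"): [0.88, 0.42, 0.70, 0.25, 0.80],
    ("Prism+FT", "mt-ref"):    [0.82, 0.45, 0.66, 0.30, 0.74],
}

# A metric that "copes" with machine-translated references keeps a high
# correlation with human judgments even in the mt-ref condition.
for (metric, condition), metric_scores in scores.items():
    tau, _ = kendalltau(human_judgments, metric_scores)
    print(f"{metric:9s} {condition:9s} Kendall tau = {tau:.2f}")
```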
Anthology ID:
2023.wmt-1.95
Volume:
Proceedings of the Eighth Conference on Machine Translation
Month:
December
Year:
2023
Address:
Singapore
Editors:
Philipp Koehn, Barry Haddow, Tom Kocmi, Christof Monz
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
983–995
URL:
https://aclanthology.org/2023.wmt-1.95
DOI:
10.18653/v1/2023.wmt-1.95
Cite (ACL):
Jannis Vamvas, Tobias Domhan, Sony Trenous, Rico Sennrich, and Eva Hasler. 2023. Trained MT Metrics Learn to Cope with Machine-translated References. In Proceedings of the Eighth Conference on Machine Translation, pages 983–995, Singapore. Association for Computational Linguistics.
Cite (Informal):
Trained MT Metrics Learn to Cope with Machine-translated References (Vamvas et al., WMT 2023)
PDF:
https://aclanthology.org/2023.wmt-1.95.pdf
Video:
https://aclanthology.org/2023.wmt-1.95.mp4