Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks

Yichen Huang, Timothy Baldwin


Abstract
We investigate the performance of MT evaluation metrics on adversarially synthesized texts to shed light on metric robustness. We experiment with word- and character-level attacks on three popular machine translation metrics: BERTScore, BLEURT, and COMET. Our human experiments validate that automatic metrics tend to overpenalize adversarially degraded translations. We also identify inconsistencies in BERTScore ratings: it judges the original sentence and the adversarially degraded one as similar, yet scores the degraded translation as notably worse than the original with respect to the reference. We identify patterns of brittleness that motivate more robust metric development.
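The BERTScore inconsistency described above can be illustrated with a short sketch. This is a minimal example assuming the `bert_score` Python package; the sentences and the character-level perturbation are illustrative placeholders, not the attack suite used in the paper.

```python
# Minimal sketch of the BERTScore consistency check described in the abstract.
# Assumes the `bert_score` package; the example sentences and the
# character-swap perturbation are illustrative, not the paper's actual attacks.
from bert_score import score

reference  = ["The committee approved the proposal on Tuesday."]
hypothesis = ["The committee approved the proposal on Tuesday."]
# Character-level perturbation of the hypothesis (illustrative typos).
degraded   = ["The committee aproved the propasal on Tuesday."]

# Score both translations against the reference.
_, _, f1_orig     = score(hypothesis, reference, lang="en")
_, _, f1_degraded = score(degraded, reference, lang="en")

# Score the degraded translation directly against the original translation.
_, _, f1_pair     = score(degraded, hypothesis, lang="en")

print(f"original vs. reference: {f1_orig.item():.4f}")
print(f"degraded vs. reference: {f1_degraded.item():.4f}")
print(f"degraded vs. original:  {f1_pair.item():.4f}")
# The inconsistency flagged in the paper: f1_pair can stay high (the two
# hypotheses look near-identical to the metric) while f1_degraded drops
# notably below f1_orig against the same reference.
```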
Anthology ID: 2023.findings-emnlp.340
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5126–5135
URL: https://aclanthology.org/2023.findings-emnlp.340
DOI: 10.18653/v1/2023.findings-emnlp.340
Cite (ACL):
Yichen Huang and Timothy Baldwin. 2023. Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5126–5135, Singapore. Association for Computational Linguistics.
Cite (Informal):
Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks (Huang & Baldwin, Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.340.pdf