BLEURT Has Universal Translations: An Analysis of Automatic Metrics by Minimum Risk Training

Yiming Yan, Tao Wang, Chengqi Zhao, Shujian Huang, Jiajun Chen, Mingxuan Wang


Abstract
Automatic metrics play a crucial role in machine translation. Despite the widespread use of n-gram-based metrics, there has been a recent surge in the development of pre-trained model-based metrics that focus on measuring sentence semantics. However, while these neural metrics achieve higher correlations with human evaluations, they are often considered black boxes with potential biases that are difficult to detect. In this study, we systematically analyze and compare mainstream and cutting-edge automatic metrics from the perspective of their guidance for training machine translation systems. Through Minimum Risk Training (MRT), we find that certain metrics exhibit robustness defects, such as the presence of universal adversarial translations in BLEURT and BARTScore. In-depth analysis suggests two main causes of these robustness deficits: distribution biases in the training datasets, and the tendency of the metric paradigm. By incorporating token-level constraints, we enhance the robustness of evaluation metrics, which in turn improves the performance of machine translation systems. Code is available at https://github.com/powerpuffpomelo/fairseq_mrt.
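As background on the training signal the paper probes, MRT minimizes the expected cost of candidate translations under the model's (renormalized) distribution, with cost derived from the evaluation metric. Below is a minimal, self-contained sketch of that expected-risk objective; the function name, the 1 − score cost, and the sharpness parameter `alpha` are illustrative assumptions, not the paper's exact implementation.

```python
import math

def mrt_risk(candidate_logprobs, metric_scores, alpha=1.0):
    """Expected risk over a sampled candidate set (illustrative sketch).

    candidate_logprobs: model log-probabilities of each candidate translation
    metric_scores: metric scores in [0, 1] for each candidate (e.g., BLEURT-like)
    alpha: sharpness of the renormalized candidate distribution (assumed knob)
    """
    # Renormalize the model distribution over the candidate set (softmax
    # of scaled log-probs), with max-subtraction for numerical stability.
    scaled = [alpha * lp for lp in candidate_logprobs]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    z = sum(weights)
    probs = [w / z for w in weights]
    # Cost of a candidate is 1 - metric score; training minimizes this
    # expectation, so a metric that over-scores a degenerate "universal
    # translation" pulls the objective toward producing it.
    return sum(p * (1.0 - s) for p, s in zip(probs, metric_scores))
```

Under this formulation, a single candidate that the metric over-rewards can dominate the expected risk once the model assigns it mass, which is the failure mode the paper surfaces for BLEURT and BARTScore.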
Anthology ID:
2023.acl-long.297
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5428–5443
URL:
https://aclanthology.org/2023.acl-long.297
DOI:
10.18653/v1/2023.acl-long.297
Cite (ACL):
Yiming Yan, Tao Wang, Chengqi Zhao, Shujian Huang, Jiajun Chen, and Mingxuan Wang. 2023. BLEURT Has Universal Translations: An Analysis of Automatic Metrics by Minimum Risk Training. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5428–5443, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
BLEURT Has Universal Translations: An Analysis of Automatic Metrics by Minimum Risk Training (Yan et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.297.pdf
Video:
https://aclanthology.org/2023.acl-long.297.mp4