Reproducibility Issues for BERT-based Evaluation Metrics

Yanran Chen, Jonas Belouadi, Steffen Eger


Abstract
Reproducibility is of utmost concern in machine learning and natural language processing (NLP). In the field of natural language generation (especially machine translation), the seminal paper of Post (2018) pointed out reproducibility problems with BLEU, the dominant metric at the time of publication. Nowadays, BERT-based evaluation metrics considerably outperform BLEU. In this paper, we ask whether results and claims from four recent BERT-based metrics can be reproduced. We find that reproduction of claims and results often fails because of (i) heavy undocumented preprocessing involved in the metrics, (ii) missing code, and (iii) reporting weaker results for the baseline metrics. (iv) In one case, the problem stems from correlating not with human scores but with a wrong column in the CSV file, inflating scores by 5 points. Motivated by the impact of preprocessing, we then conduct a second study where we examine its effects more closely (for one of the metrics). We find that preprocessing can have large effects, especially for highly inflectional languages. In this case, the effect of preprocessing may be larger than the effect of the aggregation mechanism (e.g., greedy alignment vs. Word Mover's Distance).
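The wrong-column issue described in point (iv) is easy to reproduce in miniature. Below is a minimal sketch (not the authors' code) of the standard setup: computing a segment-level Pearson correlation between metric scores and human judgments read from a CSV file. The file name and column names are hypothetical; the point is that correlating against the wrong numeric column runs without error and silently shifts the reported number.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file with one row per translated segment.
df = pd.read_csv("wmt_segment_scores.csv")

# Intended computation: correlate the metric against the human judgments.
r_correct, _ = pearsonr(df["metric_score"], df["human_score"])

# Bug pattern from the paper: correlating against some other numeric
# column (e.g., another metric's scores) raises no error, but can
# inflate the reported correlation by several points.
r_wrong, _ = pearsonr(df["metric_score"], df["other_metric_score"])

print(f"against human scores: {r_correct:.3f}")
print(f"against wrong column: {r_wrong:.3f}")

A simple guard, such as asserting the expected column names before correlating, would catch this class of error before results are reported.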
Anthology ID: 2022.emnlp-main.192
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 2965–2989
URL: https://aclanthology.org/2022.emnlp-main.192
DOI: 10.18653/v1/2022.emnlp-main.192
Cite (ACL): Yanran Chen, Jonas Belouadi, and Steffen Eger. 2022. Reproducibility Issues for BERT-based Evaluation Metrics. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2965–2989, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Reproducibility Issues for BERT-based Evaluation Metrics (Chen et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.192.pdf