A global analysis of metrics used for measuring performance in natural language processing

Kathrin Blagec, Georg Dorffner, Milad Moradi, Simon Ott, Matthias Samwald

Abstract
Measuring the performance of natural language processing models is challenging. Traditionally used metrics, such as BLEU and ROUGE, originally devised for machine translation and summarization, have been shown to suffer from low correlation with human judgment and a lack of transferability to other tasks and languages. In the past 15 years, a wide range of alternative metrics have been proposed. However, it is unclear to what extent this has had an impact on NLP benchmarking efforts. Here we provide the first large-scale cross-sectional analysis of metrics used for measuring performance in natural language processing. We curated, mapped and systematized more than 3500 machine learning model performance results from the open repository ‘Papers with Code’ to enable a global and comprehensive analysis. Our results suggest that the large majority of natural language processing metrics currently used have properties that may result in an inadequate reflection of a model’s performance. Furthermore, we found that ambiguities and inconsistencies in the reporting of metrics may lead to difficulties in interpreting and comparing model performances, impairing transparency and reproducibility in NLP research.
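For context, BLEU, one of the metrics the abstract singles out, scores a candidate sentence by its surface n-gram overlap with a reference. The following is a minimal Python sketch of sentence-level BLEU, not the paper's analysis code; production implementations such as sacrebleu additionally handle tokenization and smoothing. The second example illustrates the failure mode the abstract alludes to: an adequate paraphrase with no n-gram overlap with the reference scores zero.

import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams occurring in a token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    # Simplified sentence-level BLEU with whitespace tokenization.
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_counts & ref_counts).values())    # clipped matches
        precisions.append(overlap / max(sum(cand_counts.values()), 1))
    if min(precisions) == 0:                                  # no smoothing here
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * geo_mean

print(bleu("the cat sat on the mat", "the cat sat on the mat"))        # 1.0
print(bleu("a feline rested upon the rug", "the cat sat on the mat"))  # 0.0: adequate paraphrase, zero overlap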
Anthology ID:
2022.nlppower-1.6
Volume:
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Tatiana Shavrina, Vladislav Mikhailov, Valentin Malykh, Ekaterina Artemova, Oleg Serikov, Vitaly Protasov
Venue:
nlppower
Publisher:
Association for Computational Linguistics
Pages:
52–63
URL:
https://aclanthology.org/2022.nlppower-1.6
DOI:
10.18653/v1/2022.nlppower-1.6
Cite (ACL):
Kathrin Blagec, Georg Dorffner, Milad Moradi, Simon Ott, and Matthias Samwald. 2022. A global analysis of metrics used for measuring performance in natural language processing. In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP, pages 52–63, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
A global analysis of metrics used for measuring performance in natural language processing (Blagec et al., nlppower 2022)
PDF:
https://aclanthology.org/2022.nlppower-1.6.pdf
Video:
https://aclanthology.org/2022.nlppower-1.6.mp4
Code:
OpenBioLink/ITO