Average Is Not Enough: Caveats of Multilingual Evaluation

Matúš Pikuliak, Marian Simko


Abstract
This position paper discusses the problem of multilingual evaluation. Relying on simple statistics, such as average per-language performance, can inject linguistic biases in favor of dominant language families into the evaluation methodology. We argue that a qualitative analysis of multilingual results, informed by comparative linguistics, is needed to detect this kind of bias. In a case study, we show that results in published works can indeed be linguistically biased, and we demonstrate that a visualization based on the URIEL typological database can detect such bias.
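As an illustration of the kind of analysis the abstract describes, here is a minimal sketch of a URIEL-based visualization. It assumes the lang2vec Python package (a common interface to URIEL), hypothetical per-language scores, and PCA for the 2D projection; these specifics are assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch: project URIEL typological vectors to 2D and color each
# language by task score, so that family-level performance clusters stand out.
# The scores below are hypothetical placeholders, not results from the paper.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
import lang2vec.lang2vec as l2v  # pip install lang2vec

# Hypothetical per-language scores, keyed by ISO 639-3 code.
scores = {"eng": 0.91, "deu": 0.89, "fra": 0.88, "rus": 0.84,
          "fin": 0.78, "tur": 0.74, "arb": 0.71, "cmn": 0.70}
langs = list(scores)

# URIEL syntactic features; the *_knn feature sets are imputed,
# so every language gets a complete feature vector.
feats = l2v.get_features(langs, "syntax_knn")
X = np.array([feats[l] for l in langs], dtype=float)

# 2D projection of the typological space.
xy = PCA(n_components=2).fit_transform(X)

plt.scatter(xy[:, 0], xy[:, 1], c=[scores[l] for l in langs], cmap="viridis")
for (x, y), lang in zip(xy, langs):
    plt.annotate(lang, (x, y))
plt.colorbar(label="task score")
plt.title("Languages in URIEL syntactic space, colored by performance")
plt.show()
```

If high scores cluster in one region of the plot (e.g., among related Indo-European languages), the average score is masking exactly the typological bias that such a visualization makes visible.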
Anthology ID: 2022.mrl-1.13
Volume: Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates (Hybrid)
Editors: Duygu Ataman, Hila Gonen, Sebastian Ruder, Orhan Firat, Gözde Gül Sahin, Jamshidbek Mirzakhalov
Venue: MRL
Publisher: Association for Computational Linguistics
Pages: 125–133
URL: https://aclanthology.org/2022.mrl-1.13
DOI: 10.18653/v1/2022.mrl-1.13
Cite (ACL): Matúš Pikuliak and Marian Simko. 2022. Average Is Not Enough: Caveats of Multilingual Evaluation. In Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL), pages 125–133, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal): Average Is Not Enough: Caveats of Multilingual Evaluation (Pikuliak & Simko, MRL 2022)
PDF: https://aclanthology.org/2022.mrl-1.13.pdf