Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models

Sara Rajaee, Christof Monz


Abstract
Recent advances in training multilingual language models on large datasets have shown promising results in knowledge transfer across languages, achieving high performance on downstream tasks. However, we question to what extent current evaluation benchmarks and setups accurately measure zero-shot cross-lingual knowledge transfer. In this work, we challenge the assumption that high zero-shot performance on target tasks reflects high cross-lingual ability by introducing more challenging setups involving instances with multiple languages. Through extensive experiments and analysis, we show that the observed high performance of multilingual models can be largely attributed to factors not requiring the transfer of actual linguistic knowledge, such as task- and surface-level knowledge. More specifically, we observe that what has been transferred across languages consists mostly of data artifacts and biases, especially for low-resource languages. Our findings highlight overlooked drawbacks of existing cross-lingual test data and evaluation setups, calling for a more nuanced understanding of the cross-lingual capabilities of multilingual models.
Anthology ID:
2024.eacl-long.177
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2895–2914
URL:
https://aclanthology.org/2024.eacl-long.177
Cite (ACL):
Sara Rajaee and Christof Monz. 2024. Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2895–2914, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models (Rajaee & Monz, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.177.pdf