What Drives Performance in Multilingual Language Models?

Sina Bagheri Nezhad, Ameeta Agrawal


Abstract
This study investigates the factors influencing the performance of multilingual large language models (MLLMs) across diverse languages. We study six MLLMs, including masked language models, autoregressive models, and instruction-tuned LLMs, on SIB-200, a topic classification dataset covering 204 languages. Our analysis considers three scenarios: ALL languages, SEEN languages (present in the model’s pretraining data), and UNSEEN languages (not present or documented in the model’s pretraining data in any meaningful way). We examine the impact of factors such as pretraining data size, general resource availability, language family, and script type on model performance. Decision tree analysis reveals that pretraining data size is the most influential factor for SEEN languages. Interestingly, however, script type and language family become more important for UNSEEN languages, highlighting the role of cross-lingual transfer learning. Notably, model size and architecture do not significantly alter the most important features identified. Our findings provide valuable insights into the strengths and limitations of current MLLMs, and we hope they will guide the development of more effective and equitable multilingual NLP systems.
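The abstract's core methodological step, ranking factors with a decision tree, can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' code: the feature columns (pretrain_tokens, resource_level, language_family, script) and the toy accuracy values are hypothetical stand-ins for the paper's per-language features.

# Minimal sketch (assumed, not the authors' released code) of a decision tree
# analysis over per-language features, as described in the abstract.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeRegressor

# Hypothetical per-language records: candidate factors plus model accuracy.
df = pd.DataFrame({
    "pretrain_tokens": [9.1e9, 2.3e8, 0.0, 5.5e7],          # pretraining data size
    "resource_level":  ["high", "mid", "low", "low"],        # general resource availability
    "language_family": ["Indo-European", "Indo-European",
                        "Atlantic-Congo", "Austronesian"],
    "script":          ["Latin", "Cyrillic", "Latin", "Latin"],
    "accuracy":        [0.86, 0.71, 0.42, 0.55],             # toy task performance
})

# Encode categorical factors as integers so the tree can split on them.
cat_cols = ["resource_level", "language_family", "script"]
df[cat_cols] = OrdinalEncoder().fit_transform(df[cat_cols])

X = df.drop(columns="accuracy")
y = df["accuracy"]

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# Impurity-based importances indicate which factor drives performance most.
for name, imp in sorted(zip(X.columns, tree.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:16s} {imp:.3f}")

On real data, the top-ranked feature would correspond to the paper's finding that pretraining data size dominates for SEEN languages, while script type and language family matter more for UNSEEN ones.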
Anthology ID:
2024.vardial-1.2
Volume:
Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Yves Scherrer, Tommi Jauhiainen, Nikola Ljubešić, Marcos Zampieri, Preslav Nakov, Jörg Tiedemann
Venues:
VarDial | WS
Publisher:
Association for Computational Linguistics
Pages:
16–27
URL:
https://aclanthology.org/2024.vardial-1.2
DOI:
10.18653/v1/2024.vardial-1.2
Cite (ACL):
Sina Bagheri Nezhad and Ameeta Agrawal. 2024. What Drives Performance in Multilingual Language Models?. In Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024), pages 16–27, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
What Drives Performance in Multilingual Language Models? (Bagheri Nezhad & Agrawal, VarDial-WS 2024)
PDF:
https://aclanthology.org/2024.vardial-1.2.pdf
Supplementary material:
2024.vardial-1.2.SupplementaryMaterial.txt