Evaluating the Elementary Multilingual Capabilities of Large Language Models with MultiQ

Carolin Holtermann, Paul Röttger, Timm Dill, Anne Lauscher


Abstract
Large language models (LLMs) need to serve everyone, including a global majority of non-English speakers. However, most LLMs today, and open LLMs in particular, are often intended for use in just English (e.g., Llama2, Mistral) or a small handful of high-resource languages (e.g., Mixtral, Qwen). Recent research shows that, despite these limits on their intended use, people prompt LLMs in many different languages. Therefore, in this paper, we investigate the basic multilingual capabilities of state-of-the-art open LLMs beyond their intended use. For this purpose, we introduce MultiQ, a new silver-standard benchmark for basic open-ended question answering, with 27.4k test questions across a typologically diverse set of 137 languages. With MultiQ, we evaluate language fidelity, i.e., whether models respond in the prompted language, and question answering accuracy. All LLMs we test respond faithfully and/or accurately for at least some languages beyond their intended use. Most models are more accurate when they respond faithfully. However, differences across models are large, and there is a long tail of languages where models are neither accurate nor faithful. We explore differences in tokenization as a potential explanation for our findings, identifying possible correlations that warrant further investigation.
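The two metrics described above, language fidelity and question answering accuracy, can be illustrated with a minimal aggregation sketch. This is a hypothetical helper, not the paper's actual evaluation pipeline: it assumes each model response has already been labeled with its detected language (by some external language identifier) and a correctness judgment, and simply computes per-language rates.

```python
from collections import defaultdict

def fidelity_and_accuracy(results):
    """Aggregate per-language metrics from a list of result records.

    Each record is assumed to hold the language the model was prompted in,
    the language it actually responded in (from an external language
    identifier), and whether the answer was judged correct.
    """
    stats = defaultdict(lambda: {"n": 0, "faithful": 0, "correct": 0})
    for r in results:
        s = stats[r["prompt_lang"]]
        s["n"] += 1
        # Fidelity: did the model answer in the language it was prompted in?
        s["faithful"] += int(r["prompt_lang"] == r["response_lang"])
        # Accuracy: was the answer judged correct?
        s["correct"] += int(r["correct"])
    return {
        lang: {
            "fidelity": s["faithful"] / s["n"],
            "accuracy": s["correct"] / s["n"],
        }
        for lang, s in stats.items()
    }
```

For example, a model that answers one German question correctly in German but answers a second German question in English (and incorrectly) would score 0.5 fidelity and 0.5 accuracy for German.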
Anthology ID:
2024.findings-acl.265
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4476–4494
URL:
https://aclanthology.org/2024.findings-acl.265
Cite (ACL):
Carolin Holtermann, Paul Röttger, Timm Dill, and Anne Lauscher. 2024. Evaluating the Elementary Multilingual Capabilities of Large Language Models with MultiQ. In Findings of the Association for Computational Linguistics ACL 2024, pages 4476–4494, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Evaluating the Elementary Multilingual Capabilities of Large Language Models with MultiQ (Holtermann et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.265.pdf