Could We Have Had Better Multilingual LLMs if English Was Not the Central Language?

Ryandito Diandaru, Lucky Susanto, Zilu Tang, Ayu Purwarianti, Derry Tanti Wijaya


Abstract
Large Language Models (LLMs) demonstrate strong machine translation capabilities on languages they are trained on. However, the impact of factors beyond training data size on translation performance remains a topic of debate, especially concerning languages not directly encountered during training. Our study examines Llama2’s translation capabilities. By modeling a linear relationship between linguistic feature distances and machine translation scores, we ask whether there are potentially better central languages for LLMs than English. Our experiments show that the 7B Llama2 model achieves over 10 BLEU when translating into every language it has seen, which rarely happens for languages it has not seen. Most translation improvements into unseen languages come from scaling up the model size rather than from instruction tuning or increasing the shot count. Furthermore, our correlation analysis reveals that syntactic similarity is not the only linguistic factor that strongly correlates with machine translation scores. Interestingly, we find that under specific circumstances, some languages (e.g., Swedish, Catalan), despite having significantly less training data, exhibit correlation levels comparable to English’s. These insights challenge the prevailing landscape of LLMs, suggesting that models centered around languages other than English could provide a more efficient foundation for multilingual applications.
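To make the analysis described in the abstract concrete, here is a minimal Python sketch of the core idea: fit a linear relationship between linguistic feature distances and BLEU scores, and measure how strongly they correlate. All language codes, distances, and BLEU values below are illustrative placeholders, not the paper's measurements, and the distance source (e.g., lang2vec/URIEL-style syntactic features) is an assumption.

# Minimal sketch of the correlation analysis from the abstract.
# All numbers are hypothetical placeholders, not the paper's data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical syntactic feature distances from one candidate central
# language (e.g., English) to several target languages, alongside
# hypothetical BLEU scores for translating into those targets.
targets = ["fra", "deu", "swe", "cat", "ind"]
distance = np.array([0.42, 0.39, 0.35, 0.44, 0.55])
bleu = np.array([38.1, 33.4, 35.2, 36.0, 21.7])

# Strength of the linear relationship between feature distance and
# translation quality.
r, p = pearsonr(distance, bleu)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")

# Linear model: BLEU = slope * distance + intercept.
slope, intercept = np.polyfit(distance, bleu, 1)
print(f"BLEU ~ {slope:.2f} * distance + {intercept:.2f}")

Repeating this fit with each candidate language (English, Swedish, Catalan, ...) as the center and comparing the resulting correlation strengths mirrors the comparison across central languages that the abstract describes.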
Anthology ID:
2024.tdle-1.4
Volume:
Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Federico Gaspari, Joss Moorkens, Itziar Aldabe, Aritz Farwell, Begoña Altuna, Stelios Piperidis, Georg Rehm, German Rigau
Venues:
TDLE | WS
Publisher:
ELRA and ICCL
Pages:
43–52
URL:
https://aclanthology.org/2024.tdle-1.4
Cite (ACL):
Ryandito Diandaru, Lucky Susanto, Zilu Tang, Ayu Purwarianti, and Derry Tanti Wijaya. 2024. Could We Have Had Better Multilingual LLMs if English Was Not the Central Language?. In Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024, pages 43–52, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Could We Have Had Better Multilingual LLMs if English Was Not the Central Language? (Diandaru et al., TDLE-WS 2024)
PDF:
https://aclanthology.org/2024.tdle-1.4.pdf