Language Adaptation of Large Language Models: An Empirical Study on LLaMA2

Shumin Wang, Yuexiang Xie, Bolin Ding, Jinyang Gao, Yanyong Zhang


Abstract
There has been a surge of interest in the language adaptation of Large Language Models (LLMs) to enhance the processing of texts in low-resource languages. While language transfer has been studied extensively for traditional language models, modern LLMs still require further exploration of language adaptation. In this paper, we present a systematic review of the language adaptation process for LLMs, covering vocabulary expansion, continued pre-training, and instruction fine-tuning, with a focus on empirical studies conducted on LLaMA2 and a discussion of the settings that affect the model's capabilities. This study provides helpful insights covering the entire language adaptation process and highlights the compatibility and interactions between the different steps, offering researchers a practical guidebook for effectively adapting LLMs to different languages.
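As a rough illustration of the first step the abstract mentions (vocabulary expansion), the sketch below uses the Hugging Face transformers library to add target-language tokens to a LLaMA2 tokenizer and resize the embedding matrix. This is a minimal sketch, not the paper's implementation: the model identifier assumes gated-model access, and the added tokens are hypothetical placeholders that would normally come from a tokenizer trained on a target-language corpus.

# Minimal sketch of vocabulary expansion with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Llama-2-7b-hf"  # assumption: access to this gated checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical target-language tokens; in practice these are frequent
# subwords extracted from a tokenizer trained on target-language text.
new_tokens = ["你好", "世界"]
num_added = tokenizer.add_tokens(new_tokens)

# Resize the embedding matrix so the new tokens receive trainable embeddings,
# which would then be learned during continued pre-training and refined
# during instruction fine-tuning.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocabulary size is now {len(tokenizer)}")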
Anthology ID: 2025.coling-main.480
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 7195–7208
URL: https://aclanthology.org/2025.coling-main.480/
Cite (ACL): Shumin Wang, Yuexiang Xie, Bolin Ding, Jinyang Gao, and Yanyong Zhang. 2025. Language Adaptation of Large Language Models: An Empirical Study on LLaMA2. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7195–7208, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Language Adaptation of Large Language Models: An Empirical Study on LLaMA2 (Wang et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.480.pdf