Sanjay Singh Chauhan
2025
Adapting Multilingual LLMs to Low-Resource Languages using Continued Pre-training and Synthetic Corpus: A Case Study for Hindi LLMs
Raviraj Joshi | Kanishk Singla | Anusha Kamath | Raunak Kalani | Rakesh Paul | Utkarsh Vaidya | Sanjay Singh Chauhan | Niranjan Wartikar | Eileen Long
Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages
Multilingual LLMs support a variety of languages; however, their performance is suboptimal for low-resource languages. In this work, we emphasize the importance of continued pre-training of multilingual LLMs and the use of translation-based synthetic pre-training corpora for improving LLMs in low-resource languages. We conduct our study in the context of the low-resource Indic language Hindi. We introduce Nemotron-Mini-Hindi 4B, a bilingual small language model (SLM) supporting both Hindi and English, based on Nemotron-Mini 4B. The model is trained on a mix of real and synthetic Hindi and English tokens, with continued pre-training performed on 400B tokens. We demonstrate that both the base and instruct models achieve state-of-the-art results on Hindi benchmarks while remaining competitive on English tasks. Additionally, we observe that the continued pre-training approach enhances the model's overall factual accuracy.
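As a rough illustration of the translation-based synthetic corpus idea described in the abstract, the sketch below machine-translates English documents into Hindi and mixes them with real Hindi and English text for continued pre-training. The translation model choice, the mixing ratio, and the helper names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: build a synthetic Hindi corpus by translating
# English documents, then mix real and synthetic data. The MT model,
# ratios, and function names are placeholders, not the paper's recipe.
import random
from transformers import pipeline

# NLLB supports English -> Hindi (Devanagari); any MT system would do here.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="hin_Deva",
)

def synthesize_hindi(english_docs):
    """Translate English documents into synthetic Hindi documents."""
    for doc in english_docs:
        yield translator(doc, max_length=512)[0]["translation_text"]

def build_mixture(real_hindi, synthetic_hindi, english, synth_ratio=0.5):
    """Combine real Hindi, a sampled share of synthetic Hindi, and English.

    synth_ratio is an assumed knob; the paper only states that real and
    synthetic tokens were mixed, not the exact proportions.
    """
    hindi = list(real_hindi) + random.sample(
        list(synthetic_hindi), int(synth_ratio * len(list(synthetic_hindi)))
    )
    corpus = hindi + list(english)
    random.shuffle(corpus)
    return corpus
```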