Anusha Kamath


2025

Adapting Multilingual LLMs to Low-Resource Languages using Continued Pre-training and Synthetic Corpus: A Case Study for Hindi LLMs
Raviraj Joshi | Kanishk Singla | Anusha Kamath | Raunak Kalani | Rakesh Paul | Utkarsh Vaidya | Sanjay Singh Chauhan | Niranjan Wartikar | Eileen Long
Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages

Multilingual LLMs support a variety of languages; however, their performance is suboptimal for low-resource languages. In this work, we emphasize the importance of continued pre-training of multilingual LLMs and the use of translation-based synthetic pre-training corpora for improving LLMs in low-resource languages. We conduct our study in the context of the low-resource Indic language Hindi. We introduce Nemotron-Mini-Hindi 4B, a bilingual SLM supporting both Hindi and English, based on Nemotron-Mini 4B. The model is trained using a mix of real and synthetic Hindi + English tokens, with continued pre-training performed on 400B tokens. We demonstrate that both the base and instruct models achieve state-of-the-art results on Hindi benchmarks while remaining competitive on English tasks. Additionally, we observe that the continued pre-training approach enhances the model's overall factual accuracy.
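
As a rough illustration of the data-mixing idea behind this kind of continued pre-training, the sketch below interleaves real Hindi, translation-based synthetic Hindi, and English text according to fixed sampling weights. The file names and the mixing ratio are illustrative assumptions, not the paper's actual configuration.

    # Minimal sketch: sample lines from several corpora with fixed weights,
    # as one might when mixing real and synthetic data for continued
    # pre-training. Paths and weights are hypothetical stand-ins.
    import random

    def mix_corpora(sources, weights, num_samples, seed=0):
        """Yield text lines drawn from the given files with the given weights."""
        rng = random.Random(seed)
        files = [open(path, encoding="utf-8") for path in sources]
        for _ in range(num_samples):
            idx = rng.choices(range(len(sources)), weights=weights, k=1)[0]
            line = files[idx].readline()
            if line:  # skip sources that are exhausted
                yield line.strip()
        for f in files:
            f.close()

    # Hypothetical corpora and an illustrative mixing ratio:
    sources = ["hindi_real.txt", "hindi_synthetic.txt", "english.txt"]
    weights = [0.4, 0.3, 0.3]
    for text in mix_corpora(sources, weights, num_samples=1000):
        pass  # feed `text` into the tokenizer / training pipeline
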

2022

Robust Candidate Generation for Entity Linking on Short Social Media Texts
Liam Hebert | Raheleh Makki | Shubhanshu Mishra | Hamidreza Saghir | Anusha Kamath | Yuval Merhav
Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022)

Entity Linking (EL) is the gateway into Knowledge Bases. Recent advances in EL utilize dense retrieval approaches for Candidate Generation, which address some of the shortcomings of the lookup-based approach of matching NER mentions against pre-computed dictionaries. In this work, we show that such methods suffer in the domain of Tweets, which often exhibit informal spelling, limited context, and a lack of specificity, among other issues. We investigate these challenges on a large and recent Tweets benchmark for EL, empirically evaluate lookup and dense retrieval approaches, and demonstrate that a hybrid solution using long contextual representations from Wikipedia is necessary to achieve considerable gains over previous work, reaching 0.93 recall.
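
To make the hybrid idea concrete, the sketch below takes the union of dictionary-lookup candidates and dense-retrieval candidates ranked by cosine similarity. The alias table, entity embeddings, and mention vector are toy stand-ins, not the paper's actual resources or encoder.

    # Minimal sketch of hybrid candidate generation: union of exact alias
    # lookup and dense retrieval by cosine similarity. All data here is a
    # hypothetical stand-in for a real alias dictionary and encoder.
    import numpy as np

    def dense_candidates(mention_vec, entity_vecs, entity_ids, k=10):
        """Return the k entities whose embeddings best match the mention."""
        sims = entity_vecs @ mention_vec / (
            np.linalg.norm(entity_vecs, axis=1) * np.linalg.norm(mention_vec) + 1e-9
        )
        top = np.argsort(-sims)[:k]
        return [entity_ids[i] for i in top]

    def hybrid_candidates(mention, mention_vec, alias_table,
                          entity_vecs, entity_ids, k=10):
        """Union of dictionary-lookup and dense-retrieval candidates."""
        lookup = set(alias_table.get(mention.lower(), []))  # exact alias match
        dense = set(dense_candidates(mention_vec, entity_vecs, entity_ids, k))
        return lookup | dense

    # Toy usage with stand-in embeddings:
    alias_table = {"nyc": ["New_York_City"]}
    entity_ids = ["New_York_City", "New_York_Yankees"]
    entity_vecs = np.random.rand(2, 64)   # stand-in entity embeddings
    mention_vec = np.random.rand(64)      # stand-in encoded tweet mention
    print(hybrid_candidates("NYC", mention_vec, alias_table,
                            entity_vecs, entity_ids, k=1))
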