Amogh Thakurdesai
2025
Non-Contextual BERT or FastText? A Comparative Analysis
Abhay Shanbhag | Suramya Jadhav | Amogh Thakurdesai | Ridhima Bhaskar Sinare | Raviraj Joshi
Proceedings of the Workshop on Beyond English: Natural Language Processing for all Languages in an Era of Large Language Models
Natural Language Processing (NLP) for low-resource languages, which lack large annotated datasets, faces significant challenges due to limited high-quality data and linguistic resources. The selection of embeddings plays a critical role in achieving strong performance in NLP tasks. While contextual BERT embeddings require a full forward pass, non-contextual BERT embeddings rely only on a table lookup. Existing research has primarily focused on contextual BERT embeddings, leaving non-contextual embeddings largely unexplored. In this study, we analyze the effectiveness of non-contextual embeddings from BERT models (MuRIL and MahaBERT) and FastText models (IndicFT and MahaFT) for tasks such as news classification, sentiment analysis, and hate speech detection in one such low-resource language, Marathi. We compare these embeddings with their contextual and compressed variants. Our findings indicate that non-contextual BERT embeddings extracted from the model's first embedding layer outperform FastText embeddings, presenting a promising alternative for low-resource NLP.
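A minimal sketch of the distinction described in the abstract, using the Hugging Face transformers API; the "google/muril-base-cased" checkpoint, the Marathi example sentence, and the mean-pooling step are illustrative assumptions, not the paper's exact pipeline:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed Hub ID for MuRIL; MahaBERT could be substituted here.
model_name = "google/muril-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

text = "ही एक चाचणी आहे"  # a short Marathi sentence (illustrative)
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Non-contextual: look the token IDs up in the first embedding layer only
    # (a table lookup, no transformer forward pass).
    non_contextual = model.get_input_embeddings()(inputs["input_ids"])

    # Contextual: run the full forward pass and take the last hidden state.
    contextual = model(**inputs).last_hidden_state

# Both are (batch, seq_len, hidden_size); mean-pooling over tokens gives a
# fixed-size sentence vector that can feed a lightweight downstream classifier.
non_contextual_sent = non_contextual.mean(dim=1)
contextual_sent = contextual.mean(dim=1)
print(non_contextual_sent.shape, contextual_sent.shape)
```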
On Limitations of LLM as Annotator for Low Resource Languages
Suramya Jadhav | Abhay Shanbhag | Amogh Thakurdesai | Ridhima Sinare | Raviraj Joshi
Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP-2025)