Abdullah Khan Zehady
2026
BanglaLlama: LLaMA for Bangla Language
Abdullah Khan Zehady | Shubhashis Roy Dipta | Naymul Islam | Safi Al Mamun | Santu Karmaker
Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Bangla is a language spoken by approximately 240 million native speakers and around 300 million people worldwide. Despite being the fifth most widely spoken language in the world, Bangla remains a "low-resource" language, and existing pretrained language models often struggle to perform well on Bangla Language Processing (BLP) tasks. This paper addresses this gap by: (1) introducing two high-quality translated Bangla instruction datasets totaling 224k samples, Bangla-Orca (172k) and Bangla-Alpaca (52k); and (2) leveraging these datasets to develop BanglaLlama, an open-source family of Bangla-specific LLMs consisting of five base and instruct variants. We present our methodology, the two large datasets, and comprehensive benchmarking results showcasing the effectiveness of our datasets and models across multiple benchmarks. We believe our proposed datasets and models will serve as a new standard baseline for future research on this widely spoken yet "low-resource" language.
2025
Read Between the Lines: A Benchmark for Uncovering Political Bias in Bangla News Articles
Nusrat Jahan Lia | Shubhashis Roy Dipta | Abdullah Khan Zehady | Naymul Islam | Madhusodan Chakraborty | Abdullah Al Wasif
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
Detecting media bias is crucial, especially in the South Asian region. Despite this, annotated datasets and computational studies for Bangla political bias research remain scarce. This is largely because political stance detection in Bangla news requires an understanding of linguistic cues, cultural context, subtle biases, rhetorical strategies, code-switching, implicit sentiment, and socio-political background. To address this, we introduce the first benchmark dataset of 200 politically significant and highly debated Bangla news articles, labeled for government-leaning, government-critique, and neutral stances, alongside diagnostic analyses for evaluating large language models (LLMs). Our comprehensive evaluation of 28 proprietary and open-source LLMs shows strong performance in detecting government-critique content (F1 up to 0.83) but substantial difficulty with neutral articles (F1 as low as 0.00). Models also tend to over-predict government-leaning stances, often misinterpreting ambiguous narratives. This dataset and its associated diagnostics provide a foundation for advancing stance detection in Bangla media research and offer insights for improving LLM performance in low-resource languages.