Ehsan Barkhordar


2024

Which Side Are You On? Investigating Politico-Economic Bias in Nepali Language Models
Surendrabikram Thapa | Kritesh Rauniyar | Ehsan Barkhordar | Hariram Veeramani | Usman Naseem
Proceedings of the 22nd Annual Workshop of the Australasian Language Technology Association

Language models are trained on vast datasets sourced from the internet, which inevitably contain biases that reflect societal norms, stereotypes, and political inclinations. These biases can manifest in model outputs, influencing a wide range of applications. While there has been extensive research on bias detection and mitigation in large language models (LLMs) for widely spoken languages like English, there is a significant gap when it comes to low-resource languages such as Nepali. This paper addresses this gap by investigating the political and economic biases present in five fill-mask models and eleven generative models trained for the Nepali language. To assess these biases, we translated the Political Compass Test (PCT) into Nepali and evaluated the models’ outputs along social and economic axes. Our findings reveal distinct biases across models, with small LMs showing a right-leaning economic bias, while larger models exhibit more complex political orientations, including left-libertarian tendencies. This study emphasizes the importance of addressing biases in low-resource languages to promote fairness and inclusivity in AI-driven technologies. Our work provides a foundation for future research on bias detection and mitigation in underrepresented languages like Nepali, contributing to the broader goal of creating more ethical AI systems.
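As an illustration of the fill-mask probing described in the abstract, the sketch below scores candidate stance words for a masked slot following a PCT-style statement. This is a minimal sketch assuming a HuggingFace fill-mask pipeline; the multilingual checkpoint, the English template, and the agree/disagree targets are placeholders, not the Nepali models or prompts evaluated in the paper.

```python
# Minimal sketch: probing a fill-mask model with a PCT-style statement.
# The checkpoint, template, and stance words below are illustrative
# placeholders, not the models or prompts used in the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# A PCT-style statement followed by a stance template; the [MASK] slot
# is scored against the candidate stance words.
prompt = (
    "Statement: The freer the market, the freer the people. "
    "I [MASK] with this statement."
)

results = fill_mask(prompt, targets=["agree", "disagree"])
for r in results:
    print(f"{r['token_str']:>10}  p={r['score']:.4f}")
```

Note that a target word which splits into multiple subwords falls back to its first subword piece, so candidate stance words should be checked against the model's vocabulary before probing.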

Why the Unexpected? Dissecting the Political and Economic Bias in Persian Small and Large Language Models
Ehsan Barkhordar | Surendrabikram Thapa | Ashwarya Maratha | Usman Naseem
Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024

Recently, language models (LMs) like BERT and large language models (LLMs) like GPT-4 have demonstrated potential in various linguistic tasks such as text generation, translation, and sentiment analysis. However, these abilities come with the risk of perpetuating biases from their training data. Political and economic inclinations play a significant role in shaping these biases. This research therefore aims to understand political and economic biases in Persian LMs and LLMs, addressing a significant gap in AI ethics and fairness research. Focusing on the Persian language, our research employs a two-step methodology: first, we administer the Political Compass Test adapted to Persian; second, we analyze the biases these models exhibit in their responses. Our findings indicate the presence of nuanced biases, underscoring the importance of ethical considerations in AI deployments within Persian-speaking contexts.
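To make the second step concrete, the sketch below aggregates Likert-style answers into the two compass coordinates. The official PCT weighting scheme is not public, so the signed-average scoring, the 4-point scale, and the example statements are illustrative assumptions, not the paper's actual procedure.

```python
# Simplified sketch of mapping model answers to compass coordinates.
# The official PCT weighting is not public; a plain signed average is
# used here as an illustrative stand-in.
from statistics import mean

# Each entry: (axis, direction, likert), where likert is the model's
# answer on a 4-point scale (0=strongly disagree ... 3=strongly agree)
# and direction is +1 if agreeing pushes right/authoritarian, -1 otherwise.
answers = [
    ("economic", +1, 3),  # pro-free-market statement, strong agree
    ("economic", -1, 1),  # pro-redistribution statement, disagree
    ("social",   -1, 3),  # pro-civil-liberties statement, strong agree
    ("social",   +1, 0),  # pro-authority statement, strong disagree
]

def axis_score(axis: str) -> float:
    """Signed mean in [-1, 1]: likert rescaled to [-1, 1], then oriented."""
    scores = [d * (l / 1.5 - 1) for a, d, l in answers if a == axis]
    return mean(scores)

print(f"economic (left - / right +): {axis_score('economic'):+.2f}")
print(f"social (libertarian - / authoritarian +): {axis_score('social'):+.2f}")
```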