Sanjana Gautam
2024
Blind Spots and Biases: Exploring the Role of Annotator Cognitive Biases in NLP
Sanjana Gautam | Mukund Srinath
Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing
With the rapid proliferation of artificial intelligence, there is growing concern over its potential to exacerbate existing biases and societal disparities and introduce novel ones. This issue has prompted widespread attention from academia, policymakers, industry, and civil society. While evidence suggests that integrating human perspectives can mitigate bias-related issues in AI systems, it also introduces challenges associated with cognitive biases inherent in human decision-making. Our research focuses on reviewing existing methodologies and ongoing investigations aimed at understanding annotation attributes that contribute to bias.
2023
The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis
Pranav Venkit | Mukund Srinath | Sanjana Gautam | Saranya Venkatraman | Vipul Gupta | Rebecca Passonneau | Shomir Wilson
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
We conduct an inquiry into the sociotechnical aspects of sentiment analysis (SA) by critically examining 189 peer-reviewed papers on their applications, models, and datasets. Our investigation stems from the recognition that SA has become an integral component of diverse sociotechnical systems, exerting influence on both social and technical users. By delving into sociological and technological literature on sentiment, we unveil distinct conceptualizations of this term in domains such as finance, government, and medicine. Our study exposes a lack of explicit definitions and frameworks for characterizing sentiment, resulting in potential challenges and biases. To tackle this issue, we propose an ethics sheet encompassing critical inquiries to guide practitioners in ensuring equitable utilization of SA. Our findings underscore the significance of adopting an interdisciplinary approach to defining sentiment in SA and offer a pragmatic solution for its implementation.
Nationality Bias in Text Generation
Pranav Narayanan Venkit | Sanjana Gautam | Ruchi Panchanadikar | Ting-Hao Huang | Shomir Wilson
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Little attention has been paid to analyzing nationality bias in language models, even though nationality is widely used as a feature to improve the performance of social NLP models. This paper examines how a text generation model, GPT-2, accentuates pre-existing societal biases about country-based demonyms. We generate stories using GPT-2 for various nationalities and use sensitivity analysis to explore how the number of internet users and the country’s economic status impact the sentiment of the stories. To reduce the propagation of biases through large language models (LLMs), we explore the debiasing method of adversarial triggering. Our results show that GPT-2 demonstrates significant bias against countries with fewer internet users, and that adversarial triggering effectively reduces this bias.
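The probing loop the abstract describes (prompt a generator per demonym, then score the sentiment of each continuation) can be sketched as below. This is a minimal illustration, not the paper's setup: the prompt template, demonyms, toy lexicon scorer, and the dictionary standing in for GPT-2 are all hypothetical stand-ins.

```python
def toy_sentiment(text):
    """Crude lexicon score in [-1, 1]: (pos - neg) / matched words.
    A stand-in for the trained sentiment model used in the paper."""
    pos = {"kind", "prosperous", "innovative", "welcoming"}
    neg = {"poor", "dangerous", "corrupt", "struggling"}
    words = text.lower().split()
    p = sum(w in pos for w in words)
    n = sum(w in neg for w in words)
    return 0.0 if p + n == 0 else (p - n) / (p + n)

def probe_demonyms(generate, demonyms, prompt="The {} people are"):
    """Generate one continuation per demonym and score its sentiment."""
    return {d: toy_sentiment(generate(prompt.format(d))) for d in demonyms}

# Hypothetical "generator" (the paper instead samples stories from GPT-2):
fake_model = {
    "The French people are": "The French people are innovative and welcoming",
    "The Malian people are": "The Malian people are poor and struggling",
}
scores = probe_demonyms(fake_model.get, ["French", "Malian"])
```

Comparing `scores` across demonyms (or regressing them against covariates such as internet-user counts) is the sensitivity-analysis step the abstract refers to.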
Co-authors
- Pranav Narayanan Venkit 2
- Mukund Srinath 2
- Shomir Wilson 2
- Saranya Venkatraman 1
- Vipul Gupta 1