Unintended Bias Detection and Mitigation in Misogynous Memes

Gitanjali Kumari, Anubhav Sinha, Asif Ekbal


Abstract
Online sexism has become a concerning issue in recent years, especially when conveyed through memes. Although this alarming phenomenon has triggered many studies from computational linguistics and natural language processing perspectives, less effort has been spent analyzing whether misogyny detection models are affected by unintended bias. Such bias can lead models to incorrectly label non-misogynous memes as misogynous due to the presence of specific identity terms, perpetuating harmful stereotypes and reinforcing negative attitudes. This paper presents the first and most comprehensive approach to measuring and mitigating unintended bias in misogynous meme detection models, aiming to develop effective strategies to counter its harmful impact. Our proposed model, the Contextualized Scene Graph-based Multimodal Network (CTXSGMNet), is an integrated architecture that combines VisualBERT, a CLIP-LSTM-based memory network, and an unbiased scene graph module with supervised contrastive loss, and achieves state-of-the-art performance in mitigating unintended bias in misogynous memes. Empirical evaluation, including both qualitative and quantitative analyses, demonstrates the effectiveness of our CTXSGMNet framework on the SemEval-2022 Task 5 (MAMI task) dataset, showcasing its promising performance in terms of Equity of Odds and F1 score. Additionally, we assess the generalizability of the proposed model by evaluating its performance on several benchmark meme datasets, providing a comprehensive understanding of our approach's efficacy across diverse datasets.
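The abstract mentions training with a supervised contrastive loss (Khosla et al., 2020). As a rough, self-contained illustration of that loss, not the paper's actual implementation or hyperparameters, here is a minimal NumPy sketch: embeddings of memes with the same label are pulled together, and all others are pushed apart.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings z (n x d).

    For each anchor, positives are all other samples with the same label;
    the denominator ranges over every other sample in the batch.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                # temperature-scaled similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                  # exclude each anchor itself
    logits = sim - sim.max(axis=1, keepdims=True)      # stabilize the softmax
    exp = np.exp(logits) * not_self
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & not_self
    # mean log-probability over positives, per anchor that has at least one positive
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor[pos.sum(axis=1) > 0].mean()
```

With well-separated classes (same-label embeddings aligned, different-label embeddings orthogonal) the loss is near zero; with labels scrambled across the same embeddings it is much larger, which is the signal the training objective exploits.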
Anthology ID:
2024.eacl-long.166
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2719–2733
URL:
https://aclanthology.org/2024.eacl-long.166
Cite (ACL):
Gitanjali Kumari, Anubhav Sinha, and Asif Ekbal. 2024. Unintended Bias Detection and Mitigation in Misogynous Memes. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2719–2733, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Unintended Bias Detection and Mitigation in Misogynous Memes (Kumari et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.166.pdf