Detecting Unintended Social Bias in Toxic Language Datasets

Nihar Sahoo, Himanshu Gupta, Pushpak Bhattacharyya


Abstract
With the rise of online hate speech, automatic detection of hate speech and offensive text as a natural language processing task is gaining popularity. However, very little research has been done on detecting unintended social bias in these toxic language datasets. This paper introduces ToxicBias, a new dataset curated from the existing dataset of the Kaggle competition “Jigsaw Unintended Bias in Toxicity Classification”. We aim to detect social biases, their categories, and targeted groups. The dataset contains instances annotated for five different bias categories, viz., gender, race/ethnicity, religion, political, and LGBTQ. We train transformer-based models on our curated dataset and report baseline performance for bias identification, target generation, and bias implications. Model biases and their mitigation are also discussed in detail. Our study motivates a systematic extraction of social bias data from toxic language datasets.
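The bias-identification baseline described in the abstract is a transformer-based classifier over the five bias categories. Below is a minimal sketch of how such a baseline could be fine-tuned; the checkpoint (bert-base-uncased), the Hugging Face Trainer API, the label names, and the toy examples are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: fine-tune a transformer to classify a toxic comment
# into one of the five ToxicBias categories. Not the paper's code.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Hypothetical label set matching the five bias categories in the paper.
LABELS = ["gender", "race/ethnicity", "religion", "political", "lgbtq"]
label2id = {label: i for i, label in enumerate(LABELS)}

# Toy examples standing in for the curated ToxicBias instances.
ds = Dataset.from_dict({
    "text": ["example toxic comment one", "example toxic comment two"],
    "label": [label2id["religion"], label2id["gender"]],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

def tokenize(batch):
    # Truncate/pad comments to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

ds = ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="toxicbias-baseline",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=ds).train()
```

Target generation and bias implications, being text-generation tasks, would instead use a sequence-to-sequence model, but the fine-tuning loop follows the same pattern.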
Anthology ID:
2022.conll-1.10
Volume:
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Antske Fokkens, Vivek Srikumar
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
132–143
URL:
https://aclanthology.org/2022.conll-1.10
DOI:
10.18653/v1/2022.conll-1.10
Cite (ACL):
Nihar Sahoo, Himanshu Gupta, and Pushpak Bhattacharyya. 2022. Detecting Unintended Social Bias in Toxic Language Datasets. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL), pages 132–143, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Detecting Unintended Social Bias in Toxic Language Datasets (Sahoo et al., CoNLL 2022)
PDF:
https://aclanthology.org/2022.conll-1.10.pdf