BiasWipe: Mitigating Unintended Bias in Text Classifiers through Model Interpretability

Mamta Mamta, Rishikant Chigrupaatii, Asif Ekbal


Abstract
Toxic content detection plays a vital role in addressing the misuse of social media platforms to harm people or groups because of their race, gender, or ethnicity. However, owing to the nature of the training datasets, systems develop unintended bias through over-generalization to the training data. This compromises the fairness of such systems, which can adversely affect certain groups on the basis of race, gender, etc. Existing methods mitigate bias using data augmentation, adversarial learning, and similar approaches, which require re-training and add extra parameters to the model. In this work, we present BiasWipe, a robust and generalizable technique to mitigate unintended bias in language models. BiasWipe uses model interpretability via Shapley values: it first identifies the neuron weights responsible for unintended bias and then achieves fairness by pruning them, without loss of original performance. It requires neither re-training nor extra model parameters. To show the effectiveness of the proposed technique for bias unlearning, we perform extensive experiments on toxic content detection with BERT, RoBERTa, and GPT models.
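The pipeline the abstract describes (attribute a bias score to individual neuron weights with Shapley values, then prune the worst offenders) can be sketched on a toy model. Everything below is illustrative, not the paper's actual implementation: the "model" is a linear scorer whose weights stand in for prunable neurons, `bias_metric` is a hypothetical unintended-bias score, and the Shapley values are estimated by Monte Carlo sampling over neuron orderings.

```python
import random

# Toy stand-in for a trained classifier: a linear scorer whose
# individual weights play the role of prunable "neurons".
# (All names and numbers here are illustrative, not from the paper.)
W = [0.1, 2.0, -0.05, 1.5]          # neuron weights
x_biased = [1.0, 1.0, 1.0, 1.0]     # input carrying an identity term

def bias_metric(mask):
    """Hypothetical unintended-bias score with some neurons masked
    (mask[i] = 0 means neuron i is pruned, 1 means it is kept)."""
    return abs(sum(w * m * x for w, m, x in zip(W, mask, x_biased)))

def shapley_values(n, metric, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: each neuron's average marginal
    contribution to the bias metric over random inclusion orders."""
    rnd = random.Random(seed)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rnd.shuffle(order)
        mask = [0.0] * n
        prev = metric(mask)
        for i in order:
            mask[i] = 1.0       # add neuron i to the coalition
            cur = metric(mask)
            phi[i] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]

phi = shapley_values(len(W), bias_metric)
# Prune the k neurons contributing most to the bias score;
# no re-training and no new parameters, only a binary mask.
k = 2
to_prune = sorted(range(len(W)), key=lambda i: phi[i])[-k:]
keep_mask = [0.0 if i in to_prune else 1.0 for i in range(len(W))]
print(sorted(to_prune))
print(bias_metric(keep_mask) < bias_metric([1.0] * len(W)))  # True
```

In a real transformer the coalition game would be played over hidden-layer weights and the metric would come from fairness evaluation data, but the structure is the same: attribute, rank, mask.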
Anthology ID:
2024.emnlp-main.1172
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21059–21070
URL:
https://aclanthology.org/2024.emnlp-main.1172
Cite (ACL):
Mamta Mamta, Rishikant Chigrupaatii, and Asif Ekbal. 2024. BiasWipe: Mitigating Unintended Bias in Text Classifiers through Model Interpretability. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21059–21070, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
BiasWipe: Mitigating Unintended Bias in Text Classifiers through Model Interpretability (Mamta et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1172.pdf