Debiasing Embeddings for Reduced Gender Bias in Text Classification

Flavien Prost, Nithum Thain, Tolga Bolukbasi


Abstract
Bolukbasi et al. (2016) demonstrated that pretrained word embeddings can inherit gender bias from the data they were trained on. We investigate how this bias affects downstream classification tasks, using the case study of occupation classification (De-Arteaga et al., 2019). We show that traditional techniques for debiasing embeddings can actually worsen the bias of the downstream classifier by providing a less noisy channel for communicating gender information. With a relatively minor adjustment, however, we show how these same techniques can be used to simultaneously reduce bias and maintain high classification accuracy.
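For context, the "traditional" debiasing the abstract refers to is the projection-based hard debiasing of Bolukbasi et al. (2016). The sketch below is a simplified illustration of that idea, not the paper's adjusted method: it estimates a gender direction as the normalized mean difference over gendered word pairs (the original uses the top PCA component of pair differences) and projects that component out of each vector. All names here (gender_direction, debias, the toy embeddings) are illustrative assumptions.

```python
import numpy as np

def gender_direction(embeddings, pairs):
    """Estimate a gender direction from gendered word pairs, e.g. ("she", "he").

    Simplification: we average the pair differences; Bolukbasi et al. (2016)
    instead take the principal component of the differences.
    """
    diffs = [embeddings[a] - embeddings[b] for a, b in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def debias(vec, g):
    """Remove the gender component by projection: v - (v . g) g."""
    return vec - np.dot(vec, g) * g

# Toy usage with random 50-dimensional vectors (illustrative only).
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(50) for w in ["she", "he", "nurse"]}
g = gender_direction(emb, [("she", "he")])
emb_debiased = {w: debias(v, g) for w, v in emb.items()}
# After projection, each vector is (numerically) orthogonal to g.
assert abs(np.dot(emb_debiased["nurse"], g)) < 1e-9
```

The paper's finding is that feeding such projected embeddings into a classifier can paradoxically make downstream gender bias worse; its proposed adjustment to this pipeline is described in the paper itself and is not reproduced here.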
Anthology ID: W19-3810
Volume: Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Month: August
Year: 2019
Address: Florence, Italy
Editors: Marta R. Costa-jussà, Christian Hardmeier, Will Radford, Kellie Webster
Venue: GeBNLP
Publisher: Association for Computational Linguistics
Pages: 69–75
URL: https://aclanthology.org/W19-3810
DOI: 10.18653/v1/W19-3810
Cite (ACL): Flavien Prost, Nithum Thain, and Tolga Bolukbasi. 2019. Debiasing Embeddings for Reduced Gender Bias in Text Classification. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 69–75, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Debiasing Embeddings for Reduced Gender Bias in Text Classification (Prost et al., GeBNLP 2019)
PDF: https://aclanthology.org/W19-3810.pdf