Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function

Yusu Qian, Urwa Muaz, Ben Zhang, Jae Won Hyun


Abstract
Gender bias exists in natural language datasets, and neural language models tend to learn it, resulting in biased text generation. In this research, we propose a debiasing approach based on modifying the loss function. We introduce a new term into the loss function that attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. Compared with two existing debiasing strategies, data augmentation and word embedding debiasing, our method performs better in several respects, especially in reducing gender bias in occupation words. Finally, we combine our approach with data augmentation and show that the combination outperforms existing strategies on all bias evaluation metrics.
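To make the loss modification concrete, the sketch below renders the idea from the abstract in PyTorch: a standard cross-entropy language-modeling loss plus a term that penalizes the log-probability gap between paired male and female words. This is an illustration under stated assumptions, not the paper's exact formulation; the pair list GENDER_PAIRS, the function name equalizing_lm_loss, and the weight lam are hypothetical.

import torch
import torch.nn.functional as F

# Hypothetical vocabulary indices for gendered word pairs,
# e.g. (he, she), (actor, actress). Real ids depend on the corpus.
GENDER_PAIRS = torch.tensor([[10, 11], [42, 43]])

def equalizing_lm_loss(logits, targets, pairs=GENDER_PAIRS, lam=0.5):
    """Cross-entropy LM loss plus a gender-equalizing penalty.

    logits:  (batch, seq_len, vocab) raw model outputs
    targets: (batch, seq_len) next-token ids
    lam:     hyperparameter weighting the equalizing term
    """
    vocab = logits.size(-1)
    ce = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))

    # Output distribution at every position.
    log_probs = F.log_softmax(logits, dim=-1)        # (batch, seq, vocab)
    male = log_probs[..., pairs[:, 0]]               # (batch, seq, n_pairs)
    female = log_probs[..., pairs[:, 1]]

    # |log p_m - log p_f| = |log(p_m / p_f)| is zero exactly when a
    # pair gets equal probability; average over positions and pairs.
    equalize = (male - female).abs().mean()

    return ce + lam * equalize

The penalty vanishes only when every gendered pair receives equal output probability, and lam trades off debiasing strength against language-modeling quality, consistent with the abstract's claim that bias can be reduced without increasing perplexity.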
Anthology ID:
P19-2031
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Fernando Alva-Manchego, Eunsol Choi, Daniel Khashabi
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
223–228
URL:
https://aclanthology.org/P19-2031
DOI:
10.18653/v1/P19-2031
Cite (ACL):
Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 223–228, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function (Qian et al., ACL 2019)
PDF:
https://aclanthology.org/P19-2031.pdf