Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings

Vaibhav Kumar, Tenzin Bhotia, Vaibhav Kumar


Abstract
Non-contextual word embedding models have been shown to inherit human-like stereotypical biases of gender, race, and religion from their training corpora. To counter this issue, a large body of research has emerged that aims to mitigate these biases while keeping the syntactic and semantic utility of embeddings intact. This paper describes the Fair Embedding Engine (FEE), a library for analyzing and mitigating gender bias in word embeddings. FEE combines various state-of-the-art techniques for quantifying, visualizing, and mitigating gender bias in word embeddings under a standard abstraction. FEE will aid practitioners in fast-track analysis of existing debiasing methods on their embedding models. Further, it will allow rapid prototyping of new methods by evaluating their performance on a suite of standard metrics.
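For intuition, a common way libraries in this space quantify gender bias in non-contextual embeddings is to project word vectors onto an estimated gender direction. The sketch below is plain NumPy, not FEE's actual API; the helper names are hypothetical. It illustrates the DirectBias measure of Bolukbasi et al. (2016), one of the standard metrics in this line of work:

import numpy as np

def gender_direction(emb, pairs=(("he", "she"), ("man", "woman"))):
    # Estimate a gender direction by averaging difference vectors of
    # gendered word pairs (after Bolukbasi et al., 2016).
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def direct_bias(emb, words, direction):
    # Mean absolute cosine similarity of each word with the gender
    # direction; higher values indicate stronger gender association.
    scores = [abs(emb[w] @ direction) / np.linalg.norm(emb[w]) for w in words]
    return float(np.mean(scores))

# emb is any dict-like mapping word -> 1-D vector (e.g. loaded GloVe):
# direct_bias(emb, ["doctor", "nurse", "engineer"], gender_direction(emb))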
Anthology ID:
2020.nlposs-1.5
Volume:
Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)
Month:
November
Year:
2020
Address:
Online
Editors:
Eunjeong L. Park, Masato Hagiwara, Dmitrijs Milajevs, Nelson F. Liu, Geeticka Chauhan, Liling Tan
Venue:
NLPOSS
Publisher:
Association for Computational Linguistics
Pages:
26–31
URL:
https://aclanthology.org/2020.nlposs-1.5
DOI:
10.18653/v1/2020.nlposs-1.5
Cite (ACL):
Vaibhav Kumar, Tenzin Bhotia, and Vaibhav Kumar. 2020. Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS), pages 26–31, Online. Association for Computational Linguistics.
Cite (Informal):
Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings (Kumar et al., NLPOSS 2020)
PDF:
https://aclanthology.org/2020.nlposs-1.5.pdf
Video:
https://slideslive.com/38939742
Code:
FEE-Fair-Embedding-Engine/FEE