Robustness and Reliability of Gender Bias Assessment in Word Embeddings: The Role of Base Pairs

Haiyang Zhang, Alison Sneyd, Mark Stevenson


Abstract
It has been shown that word embeddings can exhibit gender bias, and various methods have been proposed to quantify this. However, the extent to which these methods capture social stereotypes inherited from the data has been debated. Bias is a complex concept and there exist multiple ways to define it. Previous work has leveraged gender word pairs to measure bias and extract biased analogies. We show that reliance on these gendered pairs has strong limitations: bias measures based on them are not robust and cannot identify common types of real-world bias, whilst analogies utilising them are unsuitable indicators of bias. In particular, the well-known analogy "man is to computer programmer as woman is to homemaker" is due to word similarity rather than bias. This has important implications for work on measuring bias in embeddings and related work on debiasing embeddings.
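The bias measures the abstract refers to typically project words onto a gender direction derived from a base pair such as (he, she). The sketch below illustrates this style of measure with toy 3-dimensional vectors; the vectors, words, and dimensionality are invented for illustration and are not the paper's data or method (real embeddings such as word2vec or GloVe are typically 300-dimensional).

```python
import numpy as np

# Toy 3-d word vectors for illustration only.
vectors = {
    "he":         np.array([ 1.0, 0.1, 0.0]),
    "she":        np.array([-1.0, 0.1, 0.0]),
    "programmer": np.array([ 0.4, 0.8, 0.1]),
    "homemaker":  np.array([-0.5, 0.7, 0.2]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Gender direction from a single base pair (he - she); the paper
# argues that measures built on such pairs are not robust.
gender_dir = vectors["he"] - vectors["she"]

# Direction-based bias score: cosine of a word with the gender direction.
# Positive scores lean towards "he", negative towards "she".
for word in ("programmer", "homemaker"):
    print(word, round(cosine(vectors[word], gender_dir), 3))
```

With these toy vectors, "programmer" scores positive (male-leaning) and "homemaker" negative (female-leaning), mimicking the kind of result the paper critiques: swapping in a different base pair can change such scores substantially.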
Anthology ID:
2020.aacl-main.76
Volume:
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing
Month:
December
Year:
2020
Address:
Suzhou, China
Venue:
AACL
Publisher:
Association for Computational Linguistics
Pages:
759–769
URL:
https://aclanthology.org/2020.aacl-main.76
PDF:
https://aclanthology.org/2020.aacl-main.76.pdf