An Empirical Study on the Fairness of Pre-trained Word Embeddings

Emeralda Sesari, Max Hort, Federica Sarro


Abstract
Pre-trained word embedding models are easily distributed and applied, as they spare users the effort of training models themselves. With widely distributed models, it is important to ensure that they do not exhibit undesired behaviour, such as biases against population groups. For this purpose, we carry out an empirical study evaluating the bias of 15 publicly available, pre-trained word embedding models based on three training algorithms (GloVe, word2vec, and fastText) with regard to four bias metrics (WEAT, SemBias, Direct Bias, and ECT). The choice of word embedding models and bias metrics is motivated by a literature survey of 37 publications that quantified bias in pre-trained word embeddings. Our results indicate that fastText is the least biased model (in 8 out of 12 cases) and that smaller vector lengths lead to higher bias.
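To illustrate one of the metrics the study applies, below is a minimal sketch of the WEAT effect size (Caliskan et al.'s d): each target word's association is its mean cosine similarity to attribute set A minus its mean similarity to attribute set B, and the effect size is the standardized difference of associations between the two target sets. The 2-d toy vectors and the example set labels are hypothetical stand-ins, not data from the paper.

```python
import math
from statistics import mean, stdev

def cosine(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus to attribute set B
    return mean(cosine(w, a) for a in A) - mean(cosine(w, b) for b in B)

def weat_effect_size(X, Y, A, B):
    # standardized difference of mean associations; bounded in [-2, 2]
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (mean(s_X) - mean(s_Y)) / stdev(s_X + s_Y)

# toy 2-d vectors standing in for real word embeddings (hypothetical data)
A = [(1.0, 0.1), (0.9, 0.0)]   # attribute set A (e.g. male terms)
B = [(0.1, 1.0), (0.0, 0.9)]   # attribute set B (e.g. female terms)
X = [(1.0, 0.2), (0.8, 0.1)]   # target set X (e.g. career words)
Y = [(0.2, 1.0), (0.1, 0.8)]   # target set Y (e.g. family words)

d = weat_effect_size(X, Y, A, B)
print(round(d, 3))  # positive d: X is closer to A, Y closer to B
```

A d near +2 indicates a strong association of target set X with attribute set A (and Y with B); values near 0 indicate no measurable bias on these word sets.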
Anthology ID:
2022.gebnlp-1.15
Volume:
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
July
Year:
2022
Address:
Seattle, Washington
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
129–144
URL:
https://aclanthology.org/2022.gebnlp-1.15
DOI:
10.18653/v1/2022.gebnlp-1.15
Cite (ACL):
Emeralda Sesari, Max Hort, and Federica Sarro. 2022. An Empirical Study on the Fairness of Pre-trained Word Embeddings. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 129–144, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study on the Fairness of Pre-trained Word Embeddings (Sesari et al., GeBNLP 2022)
PDF:
https://aclanthology.org/2022.gebnlp-1.15.pdf
Video:
https://aclanthology.org/2022.gebnlp-1.15.mp4