A Bayesian Approach to Uncertainty in Word Embedding Bias Estimation

Alicja Dobrzeniecka, Rafal Urbaniak


Abstract
Multiple measures, such as WEAT or MAC, attempt to quantify the magnitude of bias present in word embeddings in terms of a single-number metric. However, such metrics and the related statistical significance calculations rely on treating pre-averaged data as individual data points and utilizing bootstrapping techniques with low sample sizes. We show that similar results can be easily obtained using such methods even if the data are generated by a null model lacking the intended bias. Consequently, we argue that this approach generates false confidence. To address this issue, we propose a Bayesian alternative: hierarchical Bayesian modeling, which enables a more uncertainty-sensitive inspection of bias in word embeddings at different levels of granularity. To showcase our method, we apply it to Religion, Gender, and Race word lists from the original research, together with our control neutral word lists. We deploy the method using Google, GloVe, and Reddit embeddings. Further, we utilize our approach to evaluate a debiasing technique applied to the Reddit word embedding. Our findings reveal a more complex landscape than suggested by the proponents of single-number metrics. The datasets and source code for the paper are publicly available.
Anthology ID:
2024.cl-2.4
Volume:
Computational Linguistics, Volume 50, Issue 2 - June 2024
Month:
June
Year:
2024
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
563–617
URL:
https://aclanthology.org/2024.cl-2.4
DOI:
10.1162/coli_a_00507
Cite (ACL):
Alicja Dobrzeniecka and Rafal Urbaniak. 2024. A Bayesian Approach to Uncertainty in Word Embedding Bias Estimation. Computational Linguistics, 50(2):563–617.
Cite (Informal):
A Bayesian Approach to Uncertainty in Word Embedding Bias Estimation (Dobrzeniecka & Urbaniak, CL 2024)
PDF:
https://aclanthology.org/2024.cl-2.4.pdf