Detecting Independent Pronoun Bias with Partially-Synthetic Data Generation

Robert Munro, Alex (Carmen) Morrison


Abstract
We report that state-of-the-art parsers consistently failed to identify “hers” and “theirs” as pronouns, but identified the masculine equivalent “his”. We find that the same biases exist in recent language models like BERT. While some of the bias comes from known sources, such as training data with gender imbalances, we find that the bias is _amplified_ in the language models, and that linguistic differences between English pronouns that are not inherently biased can become biases in some machine learning models. We introduce a new technique for measuring bias in models, using Bayesian approximations to generate partially-synthetic data from the model itself.
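
The abstract only names the probing technique, so the following is a minimal sketch of the general idea it describes: mask the independent-pronoun slot in a sentence and compare a masked language model's probabilities for “his”, “hers”, and “theirs”. The template sentences and the choice of bert-base-uncased are assumptions for illustration, not the paper's exact data or its Bayesian generation procedure.

```python
# Minimal sketch (not the authors' exact method): probe a masked language
# model for independent-pronoun bias by masking the pronoun slot and
# comparing predicted probabilities for "his", "hers", and "theirs".
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical template sentences; the paper derives its sentences from
# real data rather than hand-written templates.
sentences = [
    "the book on the table is [MASK] .",
    "the seat near the window is [MASK] .",
]

# Assumes each pronoun is a single token in the model's vocabulary.
pronouns = ["his", "hers", "theirs"]
pronoun_ids = [tokenizer.convert_tokens_to_ids(p) for p in pronouns]

for text in sentences:
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the single [MASK] position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    scores = {p: probs[i].item() for p, i in zip(pronouns, pronoun_ids)}
    print(text, scores)
```

Systematically lower probability for “hers” and “theirs” than for “his” across many such sentences would be one symptom of the bias the paper reports.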
Anthology ID:
2020.emnlp-main.157
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2011–2017
URL:
https://aclanthology.org/2020.emnlp-main.157
DOI:
10.18653/v1/2020.emnlp-main.157
Cite (ACL):
Robert Munro and Alex (Carmen) Morrison. 2020. Detecting Independent Pronoun Bias with Partially-Synthetic Data Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2011–2017, Online. Association for Computational Linguistics.
Cite (Informal):
Detecting Independent Pronoun Bias with Partially-Synthetic Data Generation (Munro & Morrison, EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.157.pdf
Video:
https://slideslive.com/38938714