Uncovering Implicit Gender Bias in Narratives through Commonsense Inference

Tenghao Huang, Faeze Brahman, Vered Shwartz, Snigdha Chaturvedi


Abstract
Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation. We study gender biases associated with the protagonist in model-generated stories. Such biases may be expressed either explicitly (“women can’t park”) or implicitly (e.g. an unsolicited male character guides her into a parking space). We focus on implicit biases, and use a commonsense reasoning engine to uncover them. Specifically, we infer and analyze the protagonist’s motivations, attributes, mental states, and implications on others. Our findings regarding implicit biases are in line with prior work that studied explicit biases, for example showing that female characters’ portrayal is centered around appearance, while male figures’ portrayal focuses on intellect.
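The analysis dimensions named in the abstract (motivations, attributes, mental states, and implications on others) correspond closely to relation types in ATOMIC-style commonsense resources. The sketch below is a hypothetical illustration of how one might format per-dimension queries for a commonsense inference model; the relation names (xIntent, xAttr, xReact, oReact) are real ATOMIC relations, but the query template, function names, and the mapping to the paper's dimensions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): map each analysis dimension
# from the abstract onto an ATOMIC-style commonsense relation, and format
# one query per dimension for a commonsense inference model. The actual
# model call is omitted; only query construction is shown.

RELATIONS = {
    "motivation": "xIntent",        # why the protagonist acts
    "attribute": "xAttr",           # how the protagonist is perceived
    "mental_state": "xReact",       # how the protagonist feels
    "effect_on_others": "oReact",   # how other characters react
}

def build_queries(event: str) -> dict:
    """Format one (event, relation) query string per analysis dimension.

    The "<event> <relation> [GEN]" template is an assumed input format
    for a COMET-style generator, used here purely for illustration.
    """
    return {dim: f"{event} {rel} [GEN]" for dim, rel in RELATIONS.items()}

queries = build_queries("The character parked the car")
```

Each formatted query would then be passed to a generative commonsense model, whose free-text inferences could be aggregated and compared across protagonist genders.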
Anthology ID:
2021.findings-emnlp.326
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venues:
EMNLP | Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3866–3873
URL:
https://aclanthology.org/2021.findings-emnlp.326
DOI:
10.18653/v1/2021.findings-emnlp.326
Cite (ACL):
Tenghao Huang, Faeze Brahman, Vered Shwartz, and Snigdha Chaturvedi. 2021. Uncovering Implicit Gender Bias in Narratives through Commonsense Inference. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3866–3873, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Uncovering Implicit Gender Bias in Narratives through Commonsense Inference (Huang et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.326.pdf
Code:
tenghaohuang/uncover_implicit_bias