Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens

Saad Hassan, Matt Huenerfauth, Cecilia Ovesdotter Alm


Abstract
Much of the world’s population experiences some form of disability during their lifetime. Caution must be exercised while designing natural language processing (NLP) systems to prevent systems from inadvertently perpetuating ableist bias against people with disabilities, i.e., prejudice that favors those with typical abilities. We report on various analyses based on word predictions of a large-scale BERT language model. Statistically significant results demonstrate that people with disabilities can be disadvantaged. Findings also explore overlapping forms of discrimination related to interconnected gender and race identities.
Anthology ID:
2021.findings-emnlp.267
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3116–3123
URL:
https://aclanthology.org/2021.findings-emnlp.267
DOI:
10.18653/v1/2021.findings-emnlp.267
Cite (ACL):
Saad Hassan, Matt Huenerfauth, and Cecilia Ovesdotter Alm. 2021. Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3116–3123, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens (Hassan et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.267.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.267.mp4