Disagreeable, Slovenly, Honest and Un-named Women? Investigating Gender Bias in English Educational Resources by Extending Existing Gender Bias Taxonomies

Haotian Zhu, Kexin Gao, Fei Xia, Mari Ostendorf


Abstract
Gender bias has been extensively studied in both the educational field and the Natural Language Processing (NLP) field: the former uses human coding to identify patterns associated with, and causes of, gender bias in text, while the latter develops methods to detect, measure, and mitigate gender bias in NLP output and models. This work aims to use NLP to facilitate automatic, quantitative analysis of educational text within the framework of a gender bias taxonomy. Analyses of both educational texts and a lexical resource (WordNet) reveal patterns of bias that can inform and aid educators in updating textbooks and lexical resources and in designing assessment items.
Anthology ID:
2024.gebnlp-1.14
Volume:
Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Seraphina Goldfarb-Tarrant, Debora Nozza
Venues:
GeBNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
219–236
URL:
https://aclanthology.org/2024.gebnlp-1.14
DOI:
10.18653/v1/2024.gebnlp-1.14
Cite (ACL):
Haotian Zhu, Kexin Gao, Fei Xia, and Mari Ostendorf. 2024. Disagreeable, Slovenly, Honest and Un-named Women? Investigating Gender Bias in English Educational Resources by Extending Existing Gender Bias Taxonomies. In Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 219–236, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Disagreeable, Slovenly, Honest and Un-named Women? Investigating Gender Bias in English Educational Resources by Extending Existing Gender Bias Taxonomies (Zhu et al., GeBNLP-WS 2024)
PDF:
https://aclanthology.org/2024.gebnlp-1.14.pdf