Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring

Prasanna Parasurama, João Sedoc


Abstract
Despite growing concerns around gender bias in NLP models used in algorithmic hiring, there is little empirical work studying the extent and nature of gendered language in resumes. Using a corpus of 709k resumes from IT firms, we train a series of models to classify the gender of the applicant, thereby measuring the extent of gendered information encoded in resumes. We also investigate whether it is possible to obfuscate gender from resumes by removing gender identifiers, hobbies, gender sub-space in embedding models, etc. We find that there is a significant amount of gendered information in resumes even after obfuscation. A simple Tf-Idf model can learn to classify gender with AUROC=0.75, and more sophisticated transformer-based models achieve AUROC=0.8. We further find that gender predictive values have low correlation with the gender direction of embeddings – meaning that what is predictive of gender is much more than what is “gendered” in the masculine/feminine sense. We discuss the algorithmic bias and fairness implications of these findings in the hiring context.
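To make the measurement concrete, here is a minimal sketch of the kind of baseline the abstract describes: a Tf-Idf text model trained to classify applicant gender from resume text and evaluated with AUROC. This is not the authors' exact pipeline; the file name, column names ("resume_text", "gender"), the 0/1 gender encoding, the logistic-regression classifier, and the train/test split are all illustrative assumptions.

```python
# Sketch only: Tf-Idf features + logistic regression as a gender classifier,
# scored with AUROC on held-out resumes. Data schema is hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed input: one resume per row, with gender encoded as 0/1.
df = pd.read_csv("resumes.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df["resume_text"], df["gender"], test_size=0.2, random_state=0
)

clf = make_pipeline(
    TfidfVectorizer(min_df=5, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# AUROC on the held-out set; the paper reports roughly 0.75 for a Tf-Idf model.
scores = clf.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, scores))
```

Per the abstract, swapping the Tf-Idf pipeline for a transformer-based classifier and applying the same AUROC evaluation yields around 0.8, even after gender obfuscation of the resume text.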
Anthology ID:
2022.gebnlp-1.7
Volume:
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
July
Year:
2022
Address:
Seattle, Washington
Editors:
Christian Hardmeier, Christine Basta, Marta R. Costa-jussà, Gabriel Stanovsky, Hila Gonen
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
74–74
URL:
https://aclanthology.org/2022.gebnlp-1.7
DOI:
10.18653/v1/2022.gebnlp-1.7
Bibkey:
Cite (ACL):
Prasanna Parasurama and João Sedoc. 2022. Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 74–74, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal):
Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring (Parasurama & Sedoc, GeBNLP 2022)
PDF:
https://aclanthology.org/2022.gebnlp-1.7.pdf