Addressing Healthcare-related Racial and LGBTQ+ Biases in Pretrained Language Models

Sean Xie, Saeed Hassanpour, Soroush Vosoughi


Abstract
Recent studies have highlighted the issue of Pretrained Language Models (PLMs) inadvertently propagating social stigmas and stereotypes, a critical concern given their widespread use. This is particularly problematic in sensitive areas like healthcare, where such biases could lead to detrimental outcomes. Our research addresses this by adapting two intrinsic bias benchmarks to quantify racial and LGBTQ+ biases in prevalent PLMs. We also empirically evaluate the effectiveness of various debiasing methods in mitigating these biases. Furthermore, we assess the impact of debiasing on both Natural Language Understanding and specific biomedical applications. Our findings reveal that while PLMs commonly exhibit healthcare-related racial and LGBTQ+ biases, the applied debiasing techniques successfully reduce these biases without compromising the models’ performance in downstream tasks.
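The abstract does not name the two intrinsic bias benchmarks it adapts or the debiasing methods it evaluates. As an illustrative assumption only, the sketch below shows how intrinsic benchmarks of this kind are commonly scored for masked PLMs: a CrowS-Pairs-style comparison of pseudo-log-likelihoods for a stereotypical sentence and its minimally edited counterpart. The model name and the placeholder sentence pair are assumptions, not details taken from the paper.

```python
# Illustrative sketch only: assumes a CrowS-Pairs-style intrinsic benchmark,
# not the authors' actual adaptation. Scores a sentence pair under a masked PLM
# by masked-token pseudo-log-likelihood.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed stand-in for the PLM under evaluation
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()


def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest of sentence), masking one token at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total


# Benchmark-level bias is typically reported as the fraction of pairs for which
# the model assigns higher pseudo-log-likelihood to the stereotypical sentence.
stereotypical = "..."       # placeholder: stereotypical sentence from the benchmark
anti_stereotypical = "..."  # placeholder: its minimally different counterpart
# prefers_stereotype = (
#     pseudo_log_likelihood(stereotypical) > pseudo_log_likelihood(anti_stereotypical)
# )
```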
Anthology ID: 2024.findings-naacl.278
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4451–4464
URL: https://aclanthology.org/2024.findings-naacl.278
Cite (ACL): Sean Xie, Saeed Hassanpour, and Soroush Vosoughi. 2024. Addressing Healthcare-related Racial and LGBTQ+ Biases in Pretrained Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4451–4464, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Addressing Healthcare-related Racial and LGBTQ+ Biases in Pretrained Language Models (Xie et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.278.pdf
Copyright: 2024.findings-naacl.278.copyright.pdf