We Don’t Talk About That: Case Studies on Intersectional Analysis of Social Bias in Large Language Models

Hannah Devinney, Jenny Björklund, Henrik Björklund


Abstract
Despite concerns that Large Language Models (LLMs) are vectors for reproducing and amplifying social biases such as sexism, transphobia, Islamophobia, and racism, there is a lack of work qualitatively analyzing how such patterns of bias are generated by LLMs. We use mixed-methods approaches and apply a feminist, intersectional lens to the problem across two language domains, Swedish and English, by generating narrative texts using LLMs. We find that hegemonic norms are consistently reproduced; dominant identities are often treated as ‘default’; and discussion of identity itself may be considered ‘inappropriate’ by the safety features applied to some LLMs. Because models behave differently depending both on their design and on the language they are trained on, we observe that strategies for identifying “bias” must be adapted to individual models and their socio-cultural contexts.

Content warning: This research concerns the identification of harms, including stereotyping, denigration, and erasure of minoritized groups. Examples, including transphobic and racist content, are included and discussed.
Anthology ID:
2024.gebnlp-1.3
Volume:
Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Seraphina Goldfarb-Tarrant, Debora Nozza
Venues:
GeBNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
33–44
URL:
https://aclanthology.org/2024.gebnlp-1.3
Cite (ACL):
Hannah Devinney, Jenny Björklund, and Henrik Björklund. 2024. We Don’t Talk About That: Case Studies on Intersectional Analysis of Social Bias in Large Language Models. In Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 33–44, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
We Don’t Talk About That: Case Studies on Intersectional Analysis of Social Bias in Large Language Models (Devinney et al., GeBNLP-WS 2024)
PDF:
https://aclanthology.org/2024.gebnlp-1.3.pdf