Bias after Prompting: Persistent Discrimination in Large Language Models

Nivedha Sivakumar, Natalie Mackraz, Samira Khorshidi, Krishna Patel, Barry-John Theobald, Luca Zappella, Nicholas Apostoloff


Abstract
A dangerous assumption that can be made from prior work on the bias transfer hypothesis (BTH) is that biases do not transfer from pre-trained large language models (LLMs) to adapted models. We invalidate this assumption by studying the BTH in causal models under prompt adaptations, as prompting is an extremely popular and accessible adaptation strategy used in real-world applications. In contrast to prior work, we find that biases can transfer through prompting and that popular prompt-based mitigation methods do not consistently prevent biases from transferring. Specifically, the correlation between intrinsic biases and those after prompt adaptation remained moderate to strong across demographics and tasks: gender (ρ ≥ 0.94) in co-reference resolution, and for age (ρ ≥ 0.98), religion (ρ ≥ 0.69), etc., in question answering. Further, we find that biases remain strongly correlated when varying few-shot composition parameters, such as sample size, stereotypical content, occupational distribution and representational balance (ρ ≥ 0.90). We evaluate several prompt-based debiasing strategies and find that different approaches have distinct strengths, but none consistently reduce bias transfer across models, tasks or demographics. These results demonstrate that correcting bias, and potentially improving reasoning ability, in intrinsic models may be reliable ways to prevent propagation of biases to downstream tasks.
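The headline numbers above are Spearman rank correlations between a model's intrinsic bias scores and the bias scores measured after prompt adaptation. As a rough illustration only (not the paper's released code), a minimal sketch of that comparison might look like the following; the group-level bias scores are hypothetical placeholders, and the correlation is computed with the standard scipy routine.

```python
# Hypothetical sketch: correlate intrinsic bias scores with bias scores
# measured after prompt adaptation, per demographic group.
# The numbers below are illustrative placeholders, not values from the paper.
from scipy.stats import spearmanr

# Illustrative per-group gender-bias scores for a single model.
intrinsic_bias = [0.62, 0.48, 0.71, 0.55, 0.80, 0.33]   # before any prompting
prompted_bias  = [0.60, 0.50, 0.69, 0.57, 0.78, 0.35]   # after few-shot prompting

rho, p_value = spearmanr(intrinsic_bias, prompted_bias)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho close to 1 indicates that the ranking of biases is preserved
# after prompt adaptation, i.e., bias transfer.
```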
Anthology ID:
2025.findings-emnlp.1008
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
18568–18593
URL:
https://aclanthology.org/2025.findings-emnlp.1008/
Cite (ACL):
Nivedha Sivakumar, Natalie Mackraz, Samira Khorshidi, Krishna Patel, Barry-John Theobald, Luca Zappella, and Nicholas Apostoloff. 2025. Bias after Prompting: Persistent Discrimination in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 18568–18593, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Bias after Prompting: Persistent Discrimination in Large Language Models (Sivakumar et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1008.pdf
Checklist:
2025.findings-emnlp.1008.checklist.pdf