Mitigating Gender Bias via Fostering Exploratory Thinking in LLMs

Kangda Wei, Hasnat Md Abdullah, Ruihong Huang


Abstract
Large Language Models (LLMs) often exhibit gender bias, resulting in unequal treatment of male and female subjects across different contexts. To address this issue, we propose a novel data generation framework that fosters exploratory thinking in LLMs. Our approach prompts models to generate story pairs featuring male and female protagonists in structurally identical, morally ambiguous scenarios, then elicits and compares their moral judgments. When inconsistencies arise, the model is guided to produce balanced, gender-neutral judgments. These story-judgment pairs are used to fine-tune or optimize the models via Direct Preference Optimization (DPO). Experimental results show that our method significantly reduces gender bias while preserving or even enhancing general model capabilities. We will release the code and generated data.
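
The abstract outlines a generate-compare-correct pipeline that ends in DPO training. Below is a minimal sketch of that idea, assuming a hypothetical `query_llm` helper and placeholder prompt templates (none of these names come from the paper); it only illustrates how a story pair and its judgments could be assembled into a standard DPO preference record with `prompt`, `chosen`, and `rejected` fields.

```python
# Hedged sketch of the data-generation idea described in the abstract.
# query_llm and all prompt strings are hypothetical placeholders, not the
# authors' code; only the {prompt, chosen, rejected} record format is the
# standard input expected by common DPO trainers.
from typing import Optional


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat LLM (e.g., through an API client)."""
    raise NotImplementedError


def build_preference_record(scenario: str) -> Optional[dict]:
    # 1) Generate a structurally identical story pair with male/female protagonists.
    story_m = query_llm(
        f"Write a morally ambiguous story with a MALE protagonist about: {scenario}"
    )
    story_f = query_llm(
        f"Rewrite the same story, identical in structure, with a FEMALE protagonist: {scenario}"
    )

    # 2) Elicit a moral judgment for each story.
    judge_m = query_llm(f"Is the protagonist's action morally acceptable? {story_m}")
    judge_f = query_llm(f"Is the protagonist's action morally acceptable? {story_f}")

    # 3) If the judgments disagree, ask for a balanced, gender-neutral judgment
    #    and keep the pair as a DPO preference record.
    if judge_m.strip() != judge_f.strip():
        balanced = query_llm(
            "The two stories differ only in the protagonist's gender, yet the "
            "judgments differ. Give one balanced, gender-neutral judgment:\n"
            f"{story_m}\n{story_f}"
        )
        prompt = f"Judge the morality of the protagonist's action:\n{story_f}"
        return {"prompt": prompt, "chosen": balanced, "rejected": judge_f}

    return None  # consistent judgments need no correction
```

Records in this prompt/chosen/rejected form can be passed to an off-the-shelf DPO implementation (for instance, TRL's DPOTrainer accepts datasets with these columns); the exact training setup used in the paper may differ.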
Anthology ID: 2025.findings-emnlp.364
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6895–6917
URL: https://aclanthology.org/2025.findings-emnlp.364/
Cite (ACL): Kangda Wei, Hasnat Md Abdullah, and Ruihong Huang. 2025. Mitigating Gender Bias via Fostering Exploratory Thinking in LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 6895–6917, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Mitigating Gender Bias via Fostering Exploratory Thinking in LLMs (Wei et al., Findings 2025)
PDF: https://aclanthology.org/2025.findings-emnlp.364.pdf
Checklist: 2025.findings-emnlp.364.checklist.pdf