Steering Towards Fairness: Mitigating Political Stance Bias in LLMs

Afrozah Nadeem, Mark Dras, Usman Naseem


Abstract
Recent advancements in large language models (LLMs) have enabled their widespread use across diverse real-world applications. However, concerns remain about their tendency to encode and reproduce ideological biases along political and economic dimensions. In this paper, we employ a framework for probing and mitigating such biases in decoder-based LLMs through analysis of internal model representations. Grounded in the Political Compass Test (PCT), this method uses contrastive pairs to extract and compare hidden layer activations from models like Mistral and DeepSeek. We introduce a comprehensive activation extraction pipeline capable of layer-wise analysis across multiple ideological axes, revealing meaningful disparities linked to political framing. Our results show that decoder LLMs systematically encode representational bias across layers, which can be leveraged for effective steering vector-based mitigation. This work provides new insights into how political bias is encoded in LLMs and offers a principled approach to debiasing beyond surface-level output interventions.
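To make the abstract's description more concrete, the sketch below illustrates the general contrastive-activation and steering-vector recipe it refers to, written against the Hugging Face transformers API. The model name, layer index, prompt pair, and scaling factor are illustrative assumptions for a Mistral-style decoder, not the authors' released code or exact configuration.

# Minimal sketch of contrastive activation extraction and steering-vector
# application for a decoder-only LLM. Model name, layer index, scaling
# factor, and prompts are placeholders, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # placeholder decoder model
LAYER = 16                                # illustrative mid-depth layer
ALPHA = 4.0                               # illustrative steering strength

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def layer_activation(prompt: str, layer: int) -> torch.Tensor:
    """Mean hidden-state activation of `prompt` at a given decoder layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so index layer + 1 is the
    # output of decoder layer `layer`; average over the sequence dimension.
    return out.hidden_states[layer + 1][0].mean(dim=0)

# Contrastive pair framing the same issue from opposing political stances
# (a toy example standing in for PCT-derived prompt pairs).
pairs = [
    ("The state should regulate markets heavily.",
     "Markets should be left entirely unregulated."),
]

# Steering vector = mean activation difference over the contrastive pairs.
diffs = [layer_activation(a, LAYER) - layer_activation(b, LAYER) for a, b in pairs]
steer_vec = torch.stack(diffs).mean(dim=0)

def steering_hook(module, inputs, output):
    """Add the scaled steering vector to the layer's hidden states."""
    hidden = output[0] + ALPHA * steer_vec.to(output[0].dtype)
    return (hidden,) + output[1:]

# Register the hook on the chosen decoder layer and generate as usual;
# removing the handle restores the unsteered model.
handle = model.model.layers[LAYER].register_forward_hook(steering_hook)
prompt = "What is your view on government regulation of the economy?"
ids = tokenizer(prompt, return_tensors="pt")
gen = model.generate(**ids, max_new_tokens=50)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
handle.remove()

In this style of intervention, subtracting rather than adding the vector (or varying ALPHA's sign and magnitude) shifts generations along the probed ideological axis, which is the basis for using such vectors as a mitigation mechanism.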
Anthology ID:
2025.case-1.6
Volume:
Proceedings of the 8th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Texts
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Ali Hürriyetoğlu, Hristo Tanev, Surendrabikram Thapa
Venues:
CASE | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
52–61
URL:
https://aclanthology.org/2025.case-1.6/
Cite (ACL):
Afrozah Nadeem, Mark Dras, and Usman Naseem. 2025. Steering Towards Fairness: Mitigating Political Stance Bias in LLMs. In Proceedings of the 8th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Texts, pages 52–61, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Steering Towards Fairness: Mitigating Political Stance Bias in LLMs (Nadeem et al., CASE 2025)
PDF:
https://aclanthology.org/2025.case-1.6.pdf