Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception

Luyang Lin, Lingzhi Wang, Jinsong Guo, Kam-Fai Wong


Abstract
The pervasive spread of misinformation and disinformation on social media underscores the critical importance of detecting media bias. While robust Large Language Models (LLMs) have emerged as foundational tools for bias prediction, concerns about inherent biases within these models persist. In this work, we investigate the presence and nature of bias within LLMs and its consequential impact on media bias detection. Departing from conventional approaches that focus solely on bias detection in media content, we delve into biases within the LLM systems themselves. Through careful examination, we probe whether LLMs exhibit biases, particularly in political bias prediction and text continuation tasks. Additionally, we explore bias across diverse topics, aiming to uncover nuanced variations in bias expression within the LLM framework. Importantly, we propose debiasing strategies, including prompt engineering and model fine-tuning. Extensive analysis of bias tendencies across different LLMs sheds light on the broader landscape of bias propagation in language models. This study advances our understanding of LLM bias, offering critical insights into its implications for bias detection tasks and paving the way for more robust and equitable AI systems.
Anthology ID: 2025.coling-main.709
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 10634–10649
URL: https://aclanthology.org/2025.coling-main.709/
Cite (ACL): Luyang Lin, Lingzhi Wang, Jinsong Guo, and Kam-Fai Wong. 2025. Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10634–10649, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception (Lin et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.709.pdf