Characterizing Positional Bias in Large Language Models: A Multi-Model Evaluation of Prompt Order Effects

Patrick Schilcher, Dominik Karasin, Michael Schöpf, Haisam Saleh, Antonela Tommasel, Markus Schedl


Abstract
Large Language Models (LLMs) are widely used for a variety of tasks such as text generation, ranking, and decision-making. However, their outputs can be influenced by various forms of bias. One such bias is positional bias, where models prioritize items based on their position within a prompt rather than on their content or quality. This affects how LLMs interpret and weigh information, potentially compromising fairness, reliability, and robustness. To assess positional bias, we prompt a range of LLMs to generate descriptions for a list of topics, systematically permuting the order of the topics and analyzing variations in the responses. Our analysis shows that ranking position affects structural features and coherence, with some LLMs also reordering or omitting topics. Nonetheless, the impact of positional bias varies across LLMs and topics, indicating an interplay with other related biases.
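The evaluation protocol described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumed details, not the authors' released code: the topic list is hypothetical and `query_llm` is a placeholder to be replaced with a real model call.

```python
from itertools import permutations

# Hypothetical topic list; the paper's actual topics and models differ.
topics = ["climate change", "quantum computing", "renaissance art"]

def build_prompt(ordered_topics):
    """Assemble one prompt asking for a short description of each topic,
    presented in the given order."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(ordered_topics))
    return f"Write a short description for each of the following topics:\n{numbered}"

def query_llm(prompt):
    """Placeholder for an LLM call; replace with an actual API request."""
    return f"[model response to a {len(prompt)}-character prompt]"

# Systematically permute the topic order and collect one response per ordering,
# so that variation attributable only to position can be analyzed downstream.
responses = {order: query_llm(build_prompt(order)) for order in permutations(topics)}
```

Downstream, each response would be compared across orderings, e.g., checking description length and coherence per position, or whether topics were reordered or omitted, in the spirit of the analysis the abstract describes.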
Anthology ID:
2025.findings-emnlp.1124
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
20643–20664
URL:
https://aclanthology.org/2025.findings-emnlp.1124/
Cite (ACL):
Patrick Schilcher, Dominik Karasin, Michael Schöpf, Haisam Saleh, Antonela Tommasel, and Markus Schedl. 2025. Characterizing Positional Bias in Large Language Models: A Multi-Model Evaluation of Prompt Order Effects. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 20643–20664, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Characterizing Positional Bias in Large Language Models: A Multi-Model Evaluation of Prompt Order Effects (Schilcher et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1124.pdf
Checklist:
2025.findings-emnlp.1124.checklist.pdf