Exploiting Primacy Effect to Improve Large Language Models

Bianca Raimondi, Maurizio Gabbrielli


Abstract
Large Language Models (LLMs) have become essential in many Natural Language Processing (NLP) tasks, leveraging extensive pre-training and fine-tuning to achieve high accuracy. However, like humans, LLMs exhibit biases, particularly positional biases such as primacy and recency effects, which can influence answer accuracy. The primacy effect, where items presented first are more likely to be remembered or selected, plays a key role in Multiple Choice Question Answering (MCQA), where the order of answer options can affect prediction outcomes. This study focuses on primacy bias in fine-tuned LLMs: we first show that fine-tuning amplifies this bias, probably due to exposure to human-like patterns. We then strategically leverage this effect by reordering response options on the basis of their semantic similarity to the query, without requiring knowledge of the correct answer. Our experimental results show that this approach significantly improves performance in MCQA. More generally, our findings underscore the dual nature of biases as both challenges and opportunities, offering insights for bias-aware model design and NLP applications.
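The reordering strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper reorders options by semantic similarity to the query, which would typically use sentence embeddings; here a simple token-overlap (Jaccard) score stands in for the similarity function, and all names and data are hypothetical.

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase tokens.

    Illustrative stand-in for a semantic similarity measure
    (the paper's actual metric is likely embedding-based).
    """
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def reorder_options(question: str, options: list[str]) -> list[str]:
    """Place options most similar to the question first, so the model's
    primacy bias favors semantically relevant candidates.

    Note: requires no knowledge of which option is correct.
    """
    return sorted(
        options,
        key=lambda opt: token_overlap(question, opt),
        reverse=True,
    )


# Hypothetical MCQA example: the most query-similar option moves to position 1.
question = "Which planet is known as the red planet?"
options = ["A blue whale", "The red planet Mars", "A yellow star", "The green comet"]
print(reorder_options(question, options))
```

The reordered list would then be inserted into the MCQA prompt in place of the original option order before querying the model.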
Anthology ID:
2025.ranlp-1.113
Volume:
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Galia Angelova, Maria Kunilovskaya, Marie Escribe, Ruslan Mitkov
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
989–997
URL:
https://aclanthology.org/2025.ranlp-1.113/
Cite (ACL):
Bianca Raimondi and Maurizio Gabbrielli. 2025. Exploiting Primacy Effect to Improve Large Language Models. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era, pages 989–997, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Exploiting Primacy Effect to Improve Large Language Models (Raimondi & Gabbrielli, RANLP 2025)
PDF:
https://aclanthology.org/2025.ranlp-1.113.pdf