Unveiling Selection Biases: Exploring Order and Token Sensitivity in Large Language Models

Sheng-Lun Wei, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen


Abstract
In this paper, we investigate the phenomena of “selection biases” in Large Language Models (LLMs), focusing on problems where models are tasked with choosing the optimal option from an ordered sequence. We delve into biases related to option order and token usage, which significantly impact LLMs’ decision-making processes. We also quantify the impact of these biases through an extensive empirical analysis across multiple models and tasks. Furthermore, we propose mitigation strategies to enhance model performance. Our key contributions are threefold: 1) Precisely quantifying the influence of option order and token usage on LLMs, 2) Developing strategies to mitigate the impact of token and order sensitivity to enhance robustness, and 3) Offering a detailed analysis of sensitivity across models and tasks, which informs the creation of more stable and reliable LLM applications for selection problems.
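The abstract does not spell out the measurement protocol on this page, so the sketch below is only an illustration of the kind of permutation-based probe that "quantifying the influence of option order" alludes to. The `ask_model` callable, the letter-label prompt format, and the consistency score are assumptions made for the example, not the authors' actual method.

```python
from itertools import permutations
from collections import Counter

def order_sensitivity(ask_model, question, options):
    """Probe order sensitivity for one multiple-choice question.

    ask_model is a hypothetical callable that takes a prompt string and
    returns the chosen option label (e.g. "A"). The same question is asked
    under every permutation of the options, and each answer is mapped back
    to the original option it refers to.
    """
    labels = "ABCDEFGH"  # assumes at most 8 options
    chosen = []
    for perm in permutations(range(len(options))):
        prompt = question + "\n" + "\n".join(
            f"{labels[slot]}. {options[orig]}" for slot, orig in enumerate(perm)
        )
        answer_label = ask_model(prompt).strip().upper()[0]
        picked_slot = labels.index(answer_label)  # position in this ordering
        chosen.append(perm[picked_slot])          # original option index
    counts = Counter(chosen)
    # Fraction of permutations on which the modal choice was NOT made:
    # 0.0 means the model picked the same underlying option every time.
    return 1.0 - counts.most_common(1)[0][1] / len(chosen)
```

For a four-option question this issues 24 model calls; a fully order-invariant model scores 0.0, while a model whose answer tracks position rather than content scores close to 1 - 1/n. This is a simple consistency proxy, not the paper's reported metric.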
Anthology ID:
2024.findings-acl.333
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5598–5621
URL:
https://aclanthology.org/2024.findings-acl.333
Cite (ACL):
Sheng-Lun Wei, Cheng-Kuang Wu, Hen-Hsen Huang, and Hsin-Hsi Chen. 2024. Unveiling Selection Biases: Exploring Order and Token Sensitivity in Large Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 5598–5621, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Unveiling Selection Biases: Exploring Order and Token Sensitivity in Large Language Models (Wei et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.333.pdf