Luis Vasquez-Reina


2025

Cognitive Biases, Task Complexity, and Result Interpretability in Large Language Models
Mario Mina | Valle Ruiz-Fernández | Júlia Falcão | Luis Vasquez-Reina | Aitor Gonzalez-Agirre
Proceedings of the 31st International Conference on Computational Linguistics

In humans, cognitive biases are systematic deviations from rationality in judgment that simplify complex decisions. They typically arise from learned behaviors or from limits on information-processing capacity. Recent work has shown that these biases can percolate through training data and ultimately be learned by language models. We examine different groups of models, factoring in model size and type (base or instruction-tuned), for four kinds of cognitive bias: primacy, recency, common token, and majority class bias. We evaluate each model's performance for each type of bias in different settings, using simple and complex variants of datasets. Our results show that some biases have much stronger effects than others, and that task complexity plays a part in eliciting stronger effects for some of these biases, as measured by effect size. We show that some cognitive biases, such as common token and majority class bias, are not straightforward to evaluate, and that, contrary to some of the previous literature, effects previously classified as common token bias are in fact due to primacy and recency bias.
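
As a rough illustration of the kind of effect-size measurement the abstract refers to (the paper's exact metric is not specified here), the following Python sketch computes Cohen's h, a standard effect size for a difference between two proportions, applied to a hypothetical primacy-bias signal: how often a model selects the first-listed option on simple versus complex variants of a task. The rates used are made-up placeholders, not results from the paper.

    # Illustrative only: the paper's actual effect-size metric is not stated
    # in this abstract; Cohen's h is used here as one common choice.
    import math

    def cohens_h(p1: float, p2: float) -> float:
        """Effect size for the difference between two proportions."""
        return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

    # Hypothetical rates of first-option answers (primacy-consistent choices)
    # on simple vs. complex dataset variants.
    rate_simple, rate_complex = 0.55, 0.70
    print(f"Cohen's h: {cohens_h(rate_complex, rate_simple):.2f}")

A larger h for the complex variant than for the simple one would correspond to the abstract's observation that task complexity can elicit stronger bias effects.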