Large Language Model Recall Uncertainty is Modulated by the Fan Effect

Jesse Roberts, Kyle Moore, Douglas Fisher, Oseremhen Ewaleifoh, Thao Pham


Abstract
This paper evaluates whether large language models (LLMs) exhibit cognitive fan effects, similar to those discovered by Anderson in humans, after being pre-trained on human textual data. We conduct two sets of in-context recall experiments designed to elicit fan effects. Consistent with human results, we find that LLM recall uncertainty, measured via token probability, is influenced by the fan effect. Our results show that removing uncertainty disrupts the observed effect. The experiments suggest the fan effect is consistent whether the fan value is induced in-context or in the pre-training data. Finally, these findings provide in-silico evidence that fan effects and typicality are expressions of the same phenomenon.
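The abstract's notion of "recall uncertainty, measured via token probability" can be illustrated with a minimal sketch: applying a softmax over a model's final-layer logits yields the probability it assigns to each candidate answer token. The probe sentence, token strings, and logit values below are hypothetical illustrations, not drawn from the paper's experiments.

```python
import math

def token_probabilities(logits):
    """Softmax over final-layer logits -> probability per candidate token."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Hypothetical logits for a yes/no recall probe (illustrative values only).
logits = {" Yes": 4.2, " No": 2.1, " Maybe": 0.5}
probs = token_probabilities(logits)

# One way to read off recall uncertainty: the probability mass NOT on
# the expected answer. The fan effect predicts this grows as more facts
# share a concept (higher fan value).
uncertainty = 1.0 - probs[" Yes"]
```

This is only a sketch of the measurement idea; the paper's actual probe design and uncertainty metric are described in the full text.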
Anthology ID:
2024.conll-1.24
Volume:
Proceedings of the 28th Conference on Computational Natural Language Learning
Month:
November
Year:
2024
Address:
Miami, FL, USA
Editors:
Libby Barak, Malihe Alikhani
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
303–313
URL:
https://aclanthology.org/2024.conll-1.24
Cite (ACL):
Jesse Roberts, Kyle Moore, Douglas Fisher, Oseremhen Ewaleifoh, and Thao Pham. 2024. Large Language Model Recall Uncertainty is Modulated by the Fan Effect. In Proceedings of the 28th Conference on Computational Natural Language Learning, pages 303–313, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal):
Large Language Model Recall Uncertainty is Modulated by the Fan Effect (Roberts et al., CoNLL 2024)
PDF:
https://aclanthology.org/2024.conll-1.24.pdf