Probing Large Language Models from a Human Behavioral Perspective

Xintong Wang, Xiaoyu Li, Xingshan Li, Chris Biemann


Abstract
Large Language Models (LLMs) have emerged as dominant foundational models in modern NLP. However, the understanding of their prediction processes and internal mechanisms, such as feed-forward networks (FFN) and multi-head self-attention (MHSA), remains largely unexplored. In this work, we probe LLMs from a human behavioral perspective, correlating values from LLMs with eye-tracking measures, which are widely recognized as meaningful indicators of human reading patterns. Our findings reveal that LLMs exhibit prediction patterns similar to those of humans but distinct from those of Shallow Language Models (SLMs). Moreover, from the middle layers onward, the correlation coefficients for both FFN and MHSA increase with layer depth, indicating that the logits within FFN increasingly encapsulate word semantics suitable for predicting tokens from the vocabulary.
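The probing setup the abstract describes, correlating per-token values extracted from a model with eye-tracking measures, can be sketched as follows. All numbers, variable names, and the choice of Pearson correlation here are illustrative assumptions, not the paper's actual data or method:

```python
# Sketch: correlate hypothetical per-token model values (e.g. FFN logits
# or attention-derived scores) with hypothetical eye-tracking reading
# times. All numbers below are placeholders, not the paper's data.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-token values from one model layer, and matching
# first-pass reading times (ms) for the same tokens.
model_values = [0.12, 0.55, 0.31, 0.80, 0.44]
reading_times = [180.0, 260.0, 210.0, 320.0, 240.0]

r = pearson(model_values, reading_times)
print(round(r, 3))  # → 0.996 on this toy data
```

In practice such a coefficient would be computed per layer and per component (FFN vs. MHSA), so that the layer-wise trend reported in the abstract can be traced.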
Anthology ID:
2024.neusymbridge-1.1
Volume:
Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Tiansi Dong, Erhard Hinrichs, Zhen Han, Kang Liu, Yangqiu Song, Yixin Cao, Christian F. Hempelmann, Rafet Sifa
Venues:
NeusymBridge | WS
Publisher:
ELRA and ICCL
Pages:
1–7
URL:
https://aclanthology.org/2024.neusymbridge-1.1
Cite (ACL):
Xintong Wang, Xiaoyu Li, Xingshan Li, and Chris Biemann. 2024. Probing Large Language Models from a Human Behavioral Perspective. In Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024, pages 1–7, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Probing Large Language Models from a Human Behavioral Perspective (Wang et al., NeusymBridge-WS 2024)
PDF:
https://aclanthology.org/2024.neusymbridge-1.1.pdf