Exploring Large Language Models’ World Perception: A Multi-Dimensional Evaluation through Data Distribution

Zhi Li, Jing Yang, Ying Liu


Abstract
In recent years, large language models (LLMs) have achieved remarkable success across diverse natural language processing tasks. Nevertheless, their capacity to process and reflect core human experiences remains underexplored. Current benchmarks for LLM evaluation typically focus on a single aspect of linguistic understanding and thus fail to capture the full breadth of models' abstract reasoning about the world. To address this gap, we propose a multidimensional paradigm for investigating the capacity of LLMs to perceive the world along temporal, spatial, sentimental, and causal dimensions. We conduct extensive experiments by partitioning datasets according to different distributions and employing various prompting strategies. Our findings reveal significant differences and shortcomings in how LLMs handle temporal granularity, multi-hop spatial reasoning, subtle sentiments, and implicit causal relationships. While sophisticated prompting approaches can mitigate some of these limitations, substantial challenges remain in capturing abstract human perception. We hope that this work, which assesses LLMs from multiple perspectives of human understanding of the world, will guide further research on LLM perception and cognition.
Anthology ID:
2025.blackboxnlp-1.24
Volume:
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Yonatan Belinkov, Aaron Mueller, Najoung Kim, Hosein Mohebbi, Hanjie Chen, Dana Arad, Gabriele Sarti
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
415–432
URL:
https://aclanthology.org/2025.blackboxnlp-1.24/
Cite (ACL):
Zhi Li, Jing Yang, and Ying Liu. 2025. Exploring Large Language Models’ World Perception: A Multi-Dimensional Evaluation through Data Distribution. In Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 415–432, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Exploring Large Language Models’ World Perception: A Multi-Dimensional Evaluation through Data Distribution (Li et al., BlackboxNLP 2025)
PDF:
https://aclanthology.org/2025.blackboxnlp-1.24.pdf