Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models?

Zhihui Yang, Yupei Wang, Kaijie Mo, Zhe Zhao, Renfen Hu


Abstract
Despite significant progress in multimodal language models (LMs), it remains unclear whether visual grounding improves their understanding of embodied knowledge relative to text-only models. To address this question, we propose a novel embodied knowledge understanding benchmark grounded in perceptual theory from psychology, covering the five external senses (visual, auditory, tactile, gustatory, and olfactory) as well as interoception. The benchmark assesses models' perceptual abilities across these sensory modalities through two tasks, vector comparison and question answering, with over 1,700 questions. Comparing 30 state-of-the-art LMs, we find, surprisingly, that vision-language models (VLMs) do not outperform text-only models on either task. Moreover, the models perform significantly worse on the visual dimension than on the other sensory dimensions. Further analysis reveals that the models' vector representations are easily influenced by word form and frequency, and that the models struggle with questions involving spatial perception and reasoning. Our findings underscore the need for more effective integration of embodied knowledge in LMs to enhance their understanding of the physical world.
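The abstract does not spell out how the vector-comparison task works, so the following is only a rough Python sketch of the general idea under stated assumptions: embeddings for concept words are taken from the model under test, compared against a sensory "anchor" word by cosine similarity, and checked for agreement with human perceptual ratings via Spearman correlation. All names, words, and ratings below are hypothetical illustrations, not the paper's actual protocol or dataset.

# Minimal sketch of a vector-comparison evaluation (hypothetical, not the
# paper's protocol): score concepts against a sensory anchor and measure
# agreement with human ratings.
import numpy as np
from scipy.stats import spearmanr

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def modality_scores(concept_vecs: dict[str, np.ndarray],
                    anchor_vec: np.ndarray) -> dict[str, float]:
    """Similarity of each concept to a sensory anchor (e.g. the word 'sweet')."""
    return {w: cosine(v, anchor_vec) for w, v in concept_vecs.items()}

def agreement_with_humans(model_scores: dict[str, float],
                          human_ratings: dict[str, float]) -> float:
    """Spearman correlation between model similarities and human ratings."""
    words = sorted(set(model_scores) & set(human_ratings))
    rho, _ = spearmanr([model_scores[w] for w in words],
                       [human_ratings[w] for w in words])
    return float(rho)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in random embeddings; in practice these would come from the LM
    # under evaluation (text-only or vision-language).
    vecs = {w: rng.normal(size=64) for w in ["honey", "lemon", "thunder"]}
    anchor = rng.normal(size=64)                            # e.g. "sweet"
    ratings = {"honey": 4.8, "lemon": 2.1, "thunder": 0.3}  # toy human ratings
    scores = modality_scores(vecs, anchor)
    print(f"Spearman rho = {agreement_with_humans(scores, ratings):.2f}")

A per-modality version of such a correlation score would make it possible to compare VLMs against text-only models on each sensory dimension separately, which is the kind of comparison the abstract reports.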
Anthology ID:
2025.findings-emnlp.920
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
16960–16978
URL:
https://aclanthology.org/2025.findings-emnlp.920/
Cite (ACL):
Zhihui Yang, Yupei Wang, Kaijie Mo, Zhe Zhao, and Renfen Hu. 2025. Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models? In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 16960–16978, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models? (Yang et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.920.pdf
Checklist:
2025.findings-emnlp.920.checklist.pdf