Decompose-ToM: Enhancing Theory of Mind Reasoning in Large Language Models through Simulation and Task Decomposition

Sneheel Sarangi, Maha Elgarf, Hanan Salam


Abstract
Theory of Mind (ToM) is the ability to understand and reflect on the mental states of others. Although this capability is crucial for human interaction, testing on Large Language Models (LLMs) reveals that they possess only a rudimentary understanding of it. While the most capable closed-source LLMs have come close to human performance on some ToM tasks, they still perform poorly on complex variations of the task that involve more structured reasoning. In this work, we utilize the concept of “pretend-play”, or “Simulation Theory”, from cognitive psychology to propose “Decompose-ToM”: an LLM-based inference algorithm that improves model performance on complex ToM tasks. We recursively simulate user perspectives and decompose the ToM task into a simpler set of tasks: subject identification, question-reframing, world model updating, and knowledge availability. We test the algorithm on higher-order ToM tasks and a task testing for ToM capabilities in a conversational setting, demonstrating that our approach shows significant improvement across models compared to baseline methods while requiring minimal prompt tuning across tasks and no additional model training.
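To make the abstract's pipeline concrete, below is a minimal Python sketch of the recursive simulation-and-decomposition loop it describes. Everything here is an illustrative assumption based only on the abstract: the function names, the prompt wording, and the `ask_llm` callable are hypothetical, not the authors' released implementation.

```python
# Hypothetical sketch of the Decompose-ToM loop from the abstract.
# `ask_llm` stands in for any LLM completion call (prompt -> text).

from typing import Callable

AskLLM = Callable[[str], str]


def identify_subject(ask_llm: AskLLM, question: str) -> str:
    """Subject identification: whose mental state does the question ask about?"""
    return ask_llm(f"Whose belief does this question ask about? {question}")


def reframe_question(ask_llm: AskLLM, question: str, subject: str) -> str:
    """Question-reframing: restate the question from the subject's own view."""
    return ask_llm(
        f"Rewrite this question as if it were asked directly to {subject}: "
        f"{question}"
    )


def update_world_model(ask_llm: AskLLM, story: str, subject: str) -> str:
    """World model updating + knowledge availability: keep only the events
    the subject could plausibly know about, yielding their view of the story."""
    return ask_llm(
        f"From this story, keep only the events that {subject} directly "
        f"witnessed or was told about:\n{story}"
    )


def decompose_tom(ask_llm: AskLLM, story: str, question: str, order: int) -> str:
    """Recursively simulate perspectives: for an order-n question
    (e.g. 'Where does A think B thinks the ball is?'), peel off one
    subject per level, then answer from the innermost simulated view."""
    subject = identify_subject(ask_llm, question)
    inner_question = reframe_question(ask_llm, question, subject)
    inner_story = update_world_model(ask_llm, story, subject)
    if order > 1:
        return decompose_tom(ask_llm, inner_story, inner_question, order - 1)
    return ask_llm(f"Story: {inner_story}\nQuestion: {inner_question}\nAnswer:")
```

The design intuition, as the abstract frames it, is that each recursive call replaces one hard higher-order ToM question with a first-order question posed against a filtered story, which base LLMs handle far more reliably.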
Anthology ID:
2025.coling-main.682
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
10228–10241
URL:
https://aclanthology.org/2025.coling-main.682/
Cite (ACL):
Sneheel Sarangi, Maha Elgarf, and Hanan Salam. 2025. Decompose-ToM: Enhancing Theory of Mind Reasoning in Large Language Models through Simulation and Task Decomposition. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10228–10241, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Decompose-ToM: Enhancing Theory of Mind Reasoning in Large Language Models through Simulation and Task Decomposition (Sarangi et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.682.pdf