ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind

Xiaomeng Ma, Lingyu Gao, Qihui Xu


Abstract
Theory of Mind (ToM), the capacity to comprehend the mental states of other individuals, is essential for numerous practical applications. With the development of large language models (LLMs), there is heated debate about whether they can perform ToM tasks. Previous studies have used different tasks and prompts to test ToM in LLMs, with inconsistent results: some studies assert that these models are capable of exhibiting ToM, while others suggest the opposite. In this study, we present ToMChallenges, a dataset for comprehensively evaluating Theory of Mind based on the Sally-Anne and Smarties tests with a diverse set of tasks. In addition, we propose an auto-grader to streamline the answer evaluation process. We tested three models: davinci, turbo, and gpt-4. Our evaluation results and error analyses show that LLMs behave inconsistently across prompts and tasks, and performing ToM tasks robustly remains a challenge for them. With this paper, we also aim to raise awareness of the difficulties of evaluating ToM in LLMs and to invite further discussion on how to design prompts and tasks that better assess their ToM abilities.
Anthology ID:
2023.conll-1.2
Volume:
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Jing Jiang, David Reitter, Shumin Deng
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
15–26
URL:
https://aclanthology.org/2023.conll-1.2
DOI:
10.18653/v1/2023.conll-1.2
Cite (ACL):
Xiaomeng Ma, Lingyu Gao, and Qihui Xu. 2023. ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 15–26, Singapore. Association for Computational Linguistics.
Cite (Informal):
ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks for Exploring Theory of Mind (Ma et al., CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-1.2.pdf
Video:
https://aclanthology.org/2023.conll-1.2.mp4