Lionel Wong


2025

Language-Informed Synthesis of Rational Agent Models for Grounded Theory-of-Mind Reasoning On-the-fly
Lance Ying | Ryan Truong | Katherine M. Collins | Cedegao E. Zhang | Megan Wei | Tyler Brooke-Wilson | Tan Zhi-Xuan | Lionel Wong | Joshua B. Tenenbaum
Findings of the Association for Computational Linguistics: EMNLP 2025

Drawing real-world social inferences usually requires taking into account information from multiple modalities. Language is a particularly powerful source of information in social settings, especially in novel situations where language can provide both abstract information about the environment dynamics and concrete specifics about an agent that cannot be easily visually observed. In this paper, we propose Language-Informed Rational Agent Synthesis (LIRAS), a framework for drawing context-specific social inferences that integrate linguistic and visual inputs. LIRAS frames multimodal social reasoning as a process of constructing structured but situation-specific agent and environment representations – leveraging multimodal language models to parse language and visual inputs into unified symbolic representations, over which a Bayesian inverse planning engine can be run to produce granular probabilistic judgments. On a range of existing and new social reasoning tasks derived from cognitive science experiments, we find that our model (instantiated with a comparatively lightweight VLM) outperforms ablations and state-of-the-art models in capturing human judgments across all domains.
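The abstract describes a two-stage pipeline: multimodal parsing into symbolic agent and environment representations, followed by Bayesian inverse planning over those representations. The Python sketch below illustrates only the inverse-planning step under toy assumptions: a hypothetical gridworld, two candidate goals, and a Boltzmann-rational action model. All names, parameters, and the environment are illustrative choices, not the paper's implementation.

```python
# Hypothetical sketch of Bayesian inverse planning over a symbolic agent model:
# a toy gridworld, candidate goals, and a Boltzmann-rational agent (illustrative only).
import math

GRID = 5                      # toy 5x5 environment, as if parsed from language/vision
GOALS = [(4, 4), (0, 4)]      # candidate goal locations for the agent
BETA = 2.0                    # rationality (inverse temperature) parameter

def cost_to_go(pos, goal):
    """Manhattan distance as a stand-in for the agent's cost-to-go."""
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def action_likelihood(pos, next_pos, goal):
    """P(move | goal): the agent prefers moves that reduce distance to its goal."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    scores = {}
    for dx, dy in moves:
        cand = (pos[0] + dx, pos[1] + dy)
        if 0 <= cand[0] < GRID and 0 <= cand[1] < GRID:
            scores[cand] = math.exp(-BETA * cost_to_go(cand, goal))
    return scores.get(next_pos, 0.0) / sum(scores.values())

def goal_posterior(trajectory):
    """Invert the rational-action model: P(goal | observed trajectory)."""
    log_post = {g: 0.0 for g in GOALS}                 # uniform prior over goals
    for pos, next_pos in zip(trajectory, trajectory[1:]):
        for g in GOALS:
            log_post[g] += math.log(action_likelihood(pos, next_pos, g) + 1e-12)
    z = sum(math.exp(v) for v in log_post.values())
    return {g: math.exp(v) / z for g, v in log_post.items()}

if __name__ == "__main__":
    observed = [(0, 0), (1, 0), (2, 0), (3, 0)]        # the agent heads east
    print(goal_posterior(observed))                    # graded judgment over candidate goals
```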

Understanding Epistemic Language with a Language-augmented Bayesian Theory of Mind
Lance Ying | Tan Zhi-Xuan | Lionel Wong | Vikash Mansinghka | Joshua B. Tenenbaum
Transactions of the Association for Computational Linguistics, Volume 13

How do people understand and evaluate claims about others’ beliefs, even though these beliefs cannot be directly observed? In this paper, we introduce a cognitive model of epistemic language interpretation, grounded in Bayesian inferences about other agents’ goals, beliefs, and intentions: a language-augmented Bayesian theory-of-mind (LaBToM). By translating natural language into an epistemic “language-of-thought” with grammar-constrained LLM decoding, then evaluating these translations against the inferences produced by inverting a generative model of rational action and perception, LaBToM captures graded plausibility judgments of epistemic claims. We validate our model in an experiment where participants watch an agent navigate a maze to find keys hidden in boxes needed to reach their goal, then rate sentences about the agent’s beliefs. In contrast with multimodal LLMs (GPT-4o, Gemini Pro) and ablated models, our model correlates highly with human judgments for a wide range of expressions, including modal language, uncertainty expressions, knowledge claims, likelihood comparisons, and attributions of false belief.
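As a rough illustration of the evaluation step described above, the hypothetical Python sketch below scores epistemic sentences, assumed to have already been translated into predicates over the agent's belief state, against posterior samples that stand in for the output of the inverted generative model. The samples, sentences, and translations are fabricated placeholders, not the paper's actual representations.

```python
# Hypothetical sketch: grade epistemic claims against posterior belief samples.
import random

random.seed(0)

# Posterior samples over which box the agent believes holds the key, standing in
# for the output of inverse planning over the agent's observations and movements.
belief_samples = [random.choices([1, 2, 3], weights=[0.7, 0.2, 0.1])[0]
                  for _ in range(2000)]

def prob(event):
    """Monte Carlo estimate of the posterior probability of an event."""
    return sum(event(b) for b in belief_samples) / len(belief_samples)

# Each sentence's (placeholder) language-of-thought translation is scored
# as a probability under the inferred belief distribution.
judgments = {
    "The agent believes the key is in box 1": prob(lambda b: b == 1),
    "The agent believes the key is in box 3": prob(lambda b: b == 3),
    "The agent thinks box 1 is more likely than box 2":
        float(prob(lambda b: b == 1) > prob(lambda b: b == 2)),
}

for sentence, score in judgments.items():
    print(f"{score:.2f}  {sentence}")
```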

2024

Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)
Bhavana Dalvi Mishra | Greg Durrett | Peter Jansen | Ben Lipkin | Danilo Neves Ribeiro | Lionel Wong | Xi Ye | Wenting Zhao
Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)