Anthony G. Cohn
2024
A Notion of Complexity for Theory of Mind via Discrete World Models
X. Angelo Huang | Emanuele La Malfa | Samuele Marro | Andrea Asperti | Anthony G. Cohn | Michael J. Wooldridge
Findings of the Association for Computational Linguistics: EMNLP 2024
Theory of Mind (ToM) can be used to assess the capabilities of Large Language Models (LLMs) in complex scenarios where social reasoning is required. While the research community has proposed many ToM benchmarks, their hardness varies greatly, and their complexity is not well defined. This work proposes a framework inspired by cognitive load theory to measure the complexity of ToM tasks. We quantify a problem’s complexity as the number of states necessary to solve it correctly. Our complexity measure also accounts for the spurious states of a ToM problem, i.e., states introduced to make the problem appear harder than it is. We use our method to assess the complexity of five widely adopted ToM benchmarks. On top of this framework, we design a prompting technique that augments the information available to a model with a description of how the environment changes with the agents’ interactions. We name this technique Discrete World Models (DWM) and show how it elicits superior performance on ToM tasks.
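As a rough illustration of the DWM idea described in this abstract, the sketch below interleaves story chunks with model-generated world-state descriptions before asking the final ToM question. The `llm(prompt)` completion function, the chunking strategy, and the prompt wording are all assumptions made for illustration, not the paper's exact setup.

```python
# Minimal sketch of a Discrete World Model (DWM)-style prompt, assuming a
# hypothetical `llm(prompt: str) -> str` completion function. The chunking
# and wording are illustrative, not the paper's exact prompts.

def dwm_answer(llm, story_steps, question):
    """Interleave story chunks with model-generated world-state descriptions,
    then ask the final ToM question over the augmented context."""
    context = ""
    for step in story_steps:
        context += step + "\n"
        # Make the (discrete) world state explicit after each interaction
        # before moving on to the next chunk of the story.
        state = llm(
            context
            + "\nDescribe how the state of the world and each agent's "
              "beliefs change after the events above.\n"
        )
        context += "World state: " + state + "\n"
    return llm(context + "\nQuestion: " + question + "\nAnswer:")


# Example usage (hypothetical `llm`):
# dwm_answer(llm,
#            ["Sally puts the marble in the basket.",
#             "Sally leaves the room; Anne moves the marble to the box."],
#            "Where will Sally look for the marble?")
```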
2022
Exploring the GLIDE model for Human Action Effect Prediction
Fangjun Li | David C. Hogg | Anthony G. Cohn
Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind
We address the following action-effect prediction task. Given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world following the action. The prediction should have the same scene context as the input image. We explore the use of the recently proposed GLIDE model for performing this task. GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text. Our idea is to mask out a region of the input image where the effect of the action is expected to occur. GLIDE is then used to inpaint the masked region conditioned on the required action. In this way, the resulting image has the same background context as the input image, updated to show the effect of the action. We give qualitative results from experiments using the EPIC dataset of ego-centric videos labelled with actions.
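The sketch below illustrates the mask-then-inpaint idea from this abstract. The `inpaint(image, mask, prompt)` wrapper and the bounding-box choice of where the effect occurs are hypothetical placeholders, not GLIDE's actual API or the authors' region-selection method.

```python
# Illustrative sketch of masking a region and letting a text-conditioned
# inpainting model redraw it, assuming a hypothetical `inpaint` wrapper.

import numpy as np
from PIL import Image

def predict_action_effect(inpaint, image_path, region, action_text):
    """Mask the region where the action's effect is expected, then let the
    inpainting model repaint it conditioned on the action description."""
    image = np.array(Image.open(image_path).convert("RGB"))
    mask = np.zeros(image.shape[:2], dtype=bool)
    x0, y0, x1, y1 = region           # bounding box of the expected effect
    mask[y0:y1, x0:x1] = True         # True = pixels the model may repaint
    # The unmasked pixels are kept, so the output shares the input image's
    # scene context, updated only inside the masked region.
    return inpaint(image, mask, action_text)
```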
2017
Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands
Muhannad Alomari | Paul Duckworth | Majd Hawasly | David C. Hogg | Anthony G. Cohn
Proceedings of the First Workshop on Language Grounding for Robotics
We present a cognitively plausible system capable of acquiring knowledge in language and vision from pairs of short video clips and linguistic descriptions. The aim of this work is to teach a robot manipulator how to execute natural language commands by demonstration. This is achieved by first learning a set of visual ‘concepts’ that abstract the visual feature spaces into concepts with human-level meaning; second, learning the mapping/grounding between words and the extracted visual concepts; and third, inducing grammar rules via a semantic representation known as Robot Control Language (RCL). We evaluate our approach against state-of-the-art supervised and unsupervised grounding and grammar induction systems, and show that a robot can learn to execute never-before-seen commands from pairs of unlabelled linguistic and visual inputs.
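As a toy illustration of the grounding step described in this abstract, the sketch below maps each word to the visual concept it co-occurs with most often across clip/description pairs. The data format and counting scheme are assumptions for illustration only; the grammar-induction stage over RCL is omitted.

```python
# Toy word-to-concept grounding by co-occurrence counting. Each training pair
# is assumed to be (caption words, visual concepts detected in the clip);
# this is a simplified stand-in for the paper's grounding step.

from collections import Counter, defaultdict

def ground_words(pairs):
    """Map each word to the visual concept it co-occurs with most often."""
    counts = defaultdict(Counter)
    for words, concepts in pairs:
        for w in words:
            counts[w].update(concepts)
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

pairs = [
    (["pick", "up", "the", "red", "block"], ["grasp", "colour:red", "shape:cube"]),
    (["move", "the", "red", "ball"], ["translate", "colour:red", "shape:sphere"]),
]
print(ground_words(pairs)["red"])  # -> 'colour:red'
```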
Co-authors
- David C. Hogg 2
- X. Angelo Huang 1
- Emanuele La Malfa 1
- Samuele Marro 1
- Andrea Asperti 1
- show all...