Exploring the GLIDE model for Human Action Effect Prediction

Fangjun Li, David C. Hogg, Anthony G. Cohn


Abstract
We address the following action-effect prediction task: given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world following the action. The prediction should preserve the scene context of the input image. We explore the use of the recently proposed GLIDE model for this task. GLIDE is a generative neural network that can synthesize (inpaint) masked regions of an image, conditioned on a short piece of text. Our idea is to mask out a region of the input image where the effect of the action is expected to occur; GLIDE is then used to inpaint the masked region conditioned on the required action. In this way, the resulting image retains the background context of the input image, updated to show the effect of the action. We give qualitative results from experiments using the EPIC dataset of ego-centric videos labelled with actions.
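
To make the mask-then-inpaint workflow concrete, the sketch below prepares the two inputs such an approach needs: the input image with the expected effect region zeroed out, and a binary mask marking which pixels should be repainted. This is a minimal illustration, not the authors' implementation: the rectangular region, file name, 64x64 working resolution, and text prompt are assumptions for the example, and the final call to glide_inpaint is a hypothetical wrapper standing in for the actual GLIDE inpainting model rather than any real package API.

    # Minimal sketch of preparing inputs for text-conditioned inpainting.
    # Assumes a hand-chosen rectangular effect region; glide_inpaint (commented
    # out below) is hypothetical, not part of any real library.
    import numpy as np
    from PIL import Image

    def build_inpaint_inputs(image_path, box, size=64):
        """Return (masked_image, mask) for a rectangular region to be inpainted.

        box is (left, top, right, bottom) in pixel coordinates of the resized
        image; mask is 1 where original pixels are kept and 0 where the model
        should paint the action's effect.
        """
        img = Image.open(image_path).convert("RGB").resize((size, size))
        arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0   # scale to [-1, 1]

        mask = np.ones((size, size, 1), dtype=np.float32)
        left, top, right, bottom = box
        mask[top:bottom, left:right, :] = 0.0                   # region to repaint

        masked = arr * mask                                     # zero out the effect region
        return masked, mask

    # Example: mask the lower-right quadrant of a "before" frame, then ask the
    # inpainting model for the post-action state (call is hypothetical).
    masked, mask = build_inpaint_inputs("frame_before.jpg", box=(32, 32, 64, 64))
    # result = glide_inpaint(masked, mask, prompt="cut the onion")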
Anthology ID: 2022.pvlam-1.1
Volume: Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind
Month: June
Year: 2022
Address: Marseille, France
Editors: Patrizia Paggio, Albert Gatt, Marc Tanti
Venue: PVLAM
Publisher: European Language Resources Association
Pages: 1–5
URL: https://aclanthology.org/2022.pvlam-1.1
Cite (ACL): Fangjun Li, David C. Hogg, and Anthony G. Cohn. 2022. Exploring the GLIDE model for Human Action Effect Prediction. In Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind, pages 1–5, Marseille, France. European Language Resources Association.
Cite (Informal): Exploring the GLIDE model for Human Action Effect Prediction (Li et al., PVLAM 2022)
PDF: https://aclanthology.org/2022.pvlam-1.1.pdf