ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities

Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, Zenglin Xu


Abstract
We introduce ViLPAct, a novel vision-language benchmark for human activity planning. It is designed for a task in which embodied AI agents reason about and forecast the future actions of humans, given video clips of their initial activities and their intents expressed in text. The dataset consists of 2.9k videos from Charades extended with intents via crowdsourcing, a multiple-choice question test set, and four strong baselines. One of the baselines implements a neurosymbolic approach based on a multimodal knowledge base (MKB), while the others are deep generative models adapted from recent state-of-the-art (SOTA) methods. According to our extensive experiments, the key challenges are compositional generalization and the effective use of information from both modalities.
Anthology ID:
2023.findings-eacl.164
Volume:
Findings of the Association for Computational Linguistics: EACL 2023
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2192–2207
URL:
https://aclanthology.org/2023.findings-eacl.164
DOI:
10.18653/v1/2023.findings-eacl.164
Cite (ACL):
Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, and Zenglin Xu. 2023. ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities. In Findings of the Association for Computational Linguistics: EACL 2023, pages 2192–2207, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities (Zhuo et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-eacl.164.pdf
Video:
https://aclanthology.org/2023.findings-eacl.164.mp4