ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities
Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, Zenglin Xu
Findings of the Association for Computational Linguistics: EACL 2023
We introduce ViLPAct, a novel vision-language benchmark for human activity planning. It is designed for a task where embodied AI agents must reason about and forecast the future actions of humans, given video clips of their initial activities and their intents expressed in text. The dataset consists of 2.9k videos from Charades extended with intents via crowdsourcing, a multi-choice question test set, and four strong baselines. One of the baselines implements a neurosymbolic approach based on a multimodal knowledge base (MKB), while the others are deep generative models adapted from recent state-of-the-art (SOTA) methods. According to our extensive experiments, the key challenges are compositional generalization and effective use of information from both modalities.
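For illustration only, the sketch below shows how a multi-choice instance of this kind (initial-activity video plus textual intent, with candidate future action sequences) might be represented and scored. The class and field names (`ActivityInstance`, `video_id`, `intent`, `choices`, `answer`) are hypothetical and do not reflect the actual ViLPAct schema.

```python
# Hypothetical representation of a multi-choice activity-forecasting instance
# and a simple accuracy metric; not the actual ViLPAct data format.
from dataclasses import dataclass
from typing import List


@dataclass
class ActivityInstance:
    video_id: str             # clip showing the person's initial activity
    intent: str               # crowdsourced textual intent
    choices: List[List[str]]  # candidate future action sequences
    answer: int               # index of the correct continuation


def accuracy(instances: List[ActivityInstance], predictions: List[int]) -> float:
    """Fraction of instances whose predicted choice index matches the answer."""
    correct = sum(p == inst.answer for inst, p in zip(instances, predictions))
    return correct / len(instances)


# Toy usage example.
example = ActivityInstance(
    video_id="CHAR_0001",
    intent="The person wants to tidy up the living room.",
    choices=[["pick up pillow", "put pillow on sofa"],
             ["open fridge", "take out food"]],
    answer=0,
)
print(accuracy([example], [0]))  # -> 1.0
```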