ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities

Ying Su, Zhan Ling, Haochen Shi, Cheng Jiayang, Yauwai Yim, Yangqiu Song


Abstract
Large language models (LLMs) have been adopted to process textual task descriptions and accomplish procedural planning in embodied AI tasks because of their powerful reasoning ability. However, there is still a lack of study on how vision language models (VLMs) behave when multi-modal task inputs are considered. Moreover, counterfactual planning, which evaluates a model's reasoning ability over alternative task situations, remains underexplored. To evaluate planning ability along both the multi-modal and counterfactual dimensions, we propose ActPlan-1K, a multi-modal planning benchmark constructed with ChatGPT and the household activity simulator iGibson2. The benchmark consists of 153 activities and 1,187 instances. Each instance describes one activity with a natural language task description and multiple environment images from the simulator; its gold plan is an action sequence over the objects in the provided scenes. We evaluate typical VLMs on both correctness and commonsense satisfaction, and find that current VLMs still struggle to generate human-level procedural plans for both normal and counterfactual activities. We further provide automatic evaluation metrics by fine-tuning a BLEURT model to facilitate future research on our benchmark.
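The abstract's automatic metric is obtained by fine-tuning BLEURT to score generated plans against gold plans. A minimal sketch of how such a learned metric could be applied is shown below, assuming the google-research BLEURT package; the checkpoint name and the example plan strings are hypothetical placeholders, and the benchmark's official evaluation script may differ.

# Sketch: scoring a model-generated plan against a gold plan with a
# BLEURT-style learned metric, in the spirit of the paper's fine-tuned
# evaluator. Requires: pip install git+https://github.com/google-research/bleurt.git
from bleurt import score

# Hypothetical path to a BLEURT checkpoint fine-tuned on plan pairs;
# not the official ActPlan-1K release name.
CHECKPOINT = "actplan_bleurt_checkpoint"

scorer = score.BleurtScorer(CHECKPOINT)

gold_plan = "1. open fridge 2. grasp milk 3. close fridge 4. pour milk into cup"
model_plan = "1. open the fridge 2. take out the milk 3. pour milk into the cup"

# BLEURT returns one similarity score per (reference, candidate) pair;
# higher scores indicate closer agreement with the gold action sequence.
scores = scorer.score(references=[gold_plan], candidates=[model_plan])
print(f"plan similarity: {scores[0]:.3f}")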
Anthology ID: 2024.emnlp-main.833
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 14953–14965
URL: https://aclanthology.org/2024.emnlp-main.833
Cite (ACL): Ying Su, Zhan Ling, Haochen Shi, Cheng Jiayang, Yauwai Yim, and Yangqiu Song. 2024. ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14953–14965, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities (Su et al., EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.833.pdf
Data: 2024.emnlp-main.833.data.zip