VIVA: A Benchmark for Vision-Grounded Decision-Making with Human Values

Zhe Hu, Yixiao Ren, Jing Li, Yu Yin


Abstract
This paper introduces VIVA, a benchmark for VIsion-grounded decision-making driven by human VAlues. While most large vision-language models (VLMs) focus on physical-level skills, our work is the first to examine their multimodal capabilities in leveraging human values to make decisions in vision-depicted situations. VIVA contains 1,062 images depicting diverse real-world situations, along with manually annotated decisions grounded in them. Given an image, the model should select the most appropriate action to address the situation and provide the relevant human values and reasoning underlying the decision. Extensive experiments on VIVA reveal the limitations of VLMs in using human values to make multimodal decisions. Further analyses indicate the potential benefits of exploiting action consequences and predicted human values.
Anthology ID:
2024.emnlp-main.137
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2294–2311
URL:
https://aclanthology.org/2024.emnlp-main.137/
DOI:
10.18653/v1/2024.emnlp-main.137
Cite (ACL):
Zhe Hu, Yixiao Ren, Jing Li, and Yu Yin. 2024. VIVA: A Benchmark for Vision-Grounded Decision-Making with Human Values. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2294–2311, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
VIVA: A Benchmark for Vision-Grounded Decision-Making with Human Values (Hu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.137.pdf
Data:
2024.emnlp-main.137.data.zip