Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions

Ziyue Wang, Chi Chen, Peng Li, Yang Liu


Abstract
Large Language Models (LLMs) demonstrate impressive reasoning ability and broad world knowledge, not only in natural language tasks but also in some vision-language tasks such as open-domain knowledge-based visual question answering (OK-VQA). As images are invisible to LLMs, researchers convert images to text to engage LLMs in the visual question reasoning procedure. This leads to discrepancies between images and their textual representations presented to LLMs, which consequently impede final reasoning performance. To fill the information gap and better leverage the reasoning capability, we design a framework that enables LLMs to proactively ask relevant questions to unveil more details in the image, along with filters for refining the generated information. We validate our idea on OK-VQA and A-OKVQA. Our method consistently boosts the performance of baseline methods by an average gain of 2.15% on OK-VQA, and achieves improvements across different LLMs.
Anthology ID:
2023.findings-emnlp.189
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2874–2890
URL:
https://aclanthology.org/2023.findings-emnlp.189
DOI:
10.18653/v1/2023.findings-emnlp.189
Cite (ACL):
Ziyue Wang, Chi Chen, Peng Li, and Yang Liu. 2023. Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2874–2890, Singapore. Association for Computational Linguistics.
Cite (Informal):
Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions (Wang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.189.pdf