Zero-shot Visual Question Answering with Language Model Feedback

Yifan Du, Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen


Abstract
In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA). Our approach employs captions generated by a captioning model as the context of an answer prediction model, which is a pre-trained language model (PLM). As the major contribution, we leverage the guidance and feedback of the prediction model to improve the capability of the captioning model. In this way, the captioning model can become aware of the task goal and the information needs of the PLM. To develop our approach, we design two specific training stages, where the first stage adapts the captioning model to the prediction model (selecting more suitable caption candidates for training) and the second stage tunes the captioning model according to the task goal (learning from feedback of the PLM). Extensive experiments demonstrate the effectiveness of the proposed approach on the knowledge-based VQA task. Specifically, on the challenging A-OKVQA dataset, LAMOC outperforms several competitive zero-shot methods and even achieves results comparable to a fine-tuned VLP model. Our code is publicly available at https://github.com/RUCAIBox/LAMOC.
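The two-stage pipeline described in the abstract can be sketched in miniature. Everything below is a hypothetical illustration, not the authors' implementation: `generate_captions` and `plm_answer_score` are toy stand-ins for the real captioning model and PLM, and the scoring heuristic (answer-token overlap) is an assumption made purely so the example runs.

```python
def generate_captions(image, k=3):
    """Toy captioner: returns k candidate captions for an image.
    Stands in for the real vision-language captioning model."""
    return [f"caption {i} for {image}" for i in range(k)]

def plm_answer_score(caption, question, answer):
    """Toy PLM feedback: how helpful the caption is for producing the
    gold answer. Here approximated as the fraction of answer tokens
    that appear in the caption (a stand-in for answer likelihood)."""
    caption_tokens = set(caption.lower().split())
    answer_tokens = answer.lower().split()
    return sum(t in caption_tokens for t in answer_tokens) / len(answer_tokens)

def stage1_select(image, question, answer, k=3):
    """Stage 1 (adaptation): among the candidate captions, keep the one
    the PLM finds most helpful, to serve as a training target that
    adapts the captioner to the prediction model."""
    candidates = generate_captions(image, k)
    return max(candidates, key=lambda c: plm_answer_score(c, question, answer))

def stage2_reward(caption, question, answer):
    """Stage 2 (feedback): the PLM's score acts as a scalar reward for
    tuning the captioner toward the task goal (e.g. via policy gradient)."""
    return plm_answer_score(caption, question, answer)
```

The key design point the abstract highlights is that the captioner never sees the VQA labels directly; it only receives signal routed through the PLM, which is what makes the captions task-aware.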
Anthology ID:
2023.findings-acl.590
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9268–9281
URL:
https://aclanthology.org/2023.findings-acl.590
DOI:
10.18653/v1/2023.findings-acl.590
Cite (ACL):
Yifan Du, Junyi Li, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Zero-shot Visual Question Answering with Language Model Feedback. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9268–9281, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Zero-shot Visual Question Answering with Language Model Feedback (Du et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.590.pdf