Co-VQA : Answering by Interactive Sub Question Sequence

Ruonan Wang, Yuxi Qian, Fangxiang Feng, Xiaojie Wang, Huixing Jiang


Abstract
Most existing approaches to Visual Question Answering (VQA) answer questions directly. People, however, usually decompose a complex question into a sequence of simple sub questions and obtain the answer to the original question only after answering the sub question sequence (SQS). By simulating this process, this paper proposes a conversation-based VQA (Co-VQA) framework consisting of three components: Questioner, Oracle, and Answerer. Questioner raises the sub questions using an extended HRED model, and Oracle answers them one by one. An Adaptive Chain Visual Reasoning Model (ACVRM) is also proposed for Answerer, in which each question-answer pair is used to update the visual representation sequentially. To perform supervised learning for each model, we introduce a well-designed method to build an SQS for each question on the VQA 2.0 and VQA-CP v2 datasets. Experimental results show that our method achieves state-of-the-art results on VQA-CP v2. Further analyses show that SQSs help build direct semantic connections between questions and images, provide question-adaptive variable-length reasoning chains, and offer explicit interpretability as well as error traceability.
Anthology ID:
2022.findings-acl.188
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
2396–2408
Language:
URL:
https://aclanthology.org/2022.findings-acl.188
DOI:
10.18653/v1/2022.findings-acl.188
Bibkey:
Cite (ACL):
Ruonan Wang, Yuxi Qian, Fangxiang Feng, Xiaojie Wang, and Huixing Jiang. 2022. Co-VQA : Answering by Interactive Sub Question Sequence. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2396–2408, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Co-VQA : Answering by Interactive Sub Question Sequence (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.188.pdf
Data
GuessWhat?!, Visual Genome, Visual Question Answering, Visual Question Answering v2.0