Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering

Dongze Hao, Qunbo Wang, Longteng Guo, Jie Jiang, Jing Liu


Abstract
While large pre-trained visual-language models have shown promising results on traditional visual question answering benchmarks, it is still challenging for them to answer complex VQA problems that require diverse world knowledge. Motivated by research on retrieval-augmented generation in natural language processing, we use Dense Passage Retrieval (DPR) to retrieve related knowledge that helps the model answer questions. However, DPR conducts retrieval in the natural language space, which may not ensure comprehensive acquisition of image information. As a result, the retrieved knowledge may not genuinely help answer the question, limiting the performance of the overall system. To address this issue, we propose a novel framework that leverages the visual-language model to select the key knowledge retrieved by DPR and to answer questions. The framework consists of two modules, a Selector and an Answerer, both initialized from the multimodal large language model (MLLM) and parameter-efficiently fine-tuned via self-bootstrapping: the Selector identifies key knowledge among the retrieved documents, which is then used to fine-tune the Answerer to predict answers; pseudo-labels for the key knowledge documents are derived from the Answerer's predictions and weak supervision labels, and are used to fine-tune the Selector; this process is repeated. Our framework significantly enhances the performance of the baseline on the challenging open-domain knowledge-based VQA benchmark, OK-VQA, achieving a state-of-the-art accuracy of 62.83%.
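The abstract describes an alternating training loop between the Selector and the Answerer. Below is a minimal Python sketch of how such a self-bootstrapping loop could be organized; all names (retrieve_documents, Module, select, answer, finetune) are hypothetical placeholders rather than the authors' released code, and the pseudo-labeling rule shown (a document is "key" if conditioning on it lets the Answerer produce the weakly supervised gold answer) is one plausible reading of "based on the predictions of the Answerer and weak supervision labels".

```python
# Illustrative sketch only; function and class names are assumptions, not the paper's API.
from typing import List, Tuple


def retrieve_documents(question: str, image_id: str, k: int = 5) -> List[str]:
    """Placeholder for DPR retrieval over an external knowledge corpus."""
    return [f"doc_{i} about {question}" for i in range(k)]


class Module:
    """Stand-in for a parameter-efficiently fine-tuned MLLM head."""

    def select(self, question: str, image_id: str, docs: List[str]) -> List[str]:
        # Selector: score retrieved documents and keep the top ones.
        return docs[:2]

    def answer(self, question: str, image_id: str, docs: List[str]) -> str:
        # Answerer: generate an answer conditioned on the selected knowledge.
        return "answer"

    def finetune(self, examples: List[Tuple]) -> None:
        # Parameter-efficient fine-tuning step (e.g. adapters / LoRA); omitted here.
        pass


def self_bootstrap(train_set, selector: Module, answerer: Module, rounds: int = 3):
    for _ in range(rounds):
        # 1) Selector picks key knowledge; Answerer is fine-tuned to answer with it.
        answerer_batch = []
        for question, image_id, gold_answer in train_set:
            docs = retrieve_documents(question, image_id)
            key_docs = selector.select(question, image_id, docs)
            answerer_batch.append((question, image_id, key_docs, gold_answer))
        answerer.finetune(answerer_batch)

        # 2) Pseudo-label each document by whether it leads the Answerer to the
        #    weakly supervised gold answer, then fine-tune the Selector on these labels.
        selector_batch = []
        for question, image_id, gold_answer in train_set:
            docs = retrieve_documents(question, image_id)
            pseudo_labels = [
                answerer.answer(question, image_id, [d]) == gold_answer for d in docs
            ]
            selector_batch.append((question, image_id, docs, pseudo_labels))
        selector.finetune(selector_batch)
```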
Anthology ID:
2024.emnlp-main.110
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1857–1868
URL:
https://aclanthology.org/2024.emnlp-main.110
Cite (ACL):
Dongze Hao, Qunbo Wang, Longteng Guo, Jie Jiang, and Jing Liu. 2024. Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1857–1868, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Self-Bootstrapped Visual-Language Model for Knowledge Selection and Question Answering (Hao et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.110.pdf