Large Language Models Know What is Key Visual Entity: An LLM-assisted Multimodal Retrieval for VQA

Pu Jian, Donglei Yu, Jiajun Zhang


Abstract
Visual question answering (VQA) tasks, often addressed by visual language models (VLMs), face challenges with long-tail knowledge. Recent retrieval-augmented VQA (RA-VQA) systems address this by retrieving and integrating external knowledge sources. However, these systems still suffer from redundant visual information irrelevant to the question during retrieval. To address this issue, in this paper, we propose LLM-RA, a novel method leveraging the reasoning capability of a large language model (LLM) to identify key visual entities, thus minimizing the impact of irrelevant information in the retriever's query. Furthermore, key visual entities are independently encoded for multimodal joint retrieval, preventing cross-entity interference. Experimental results demonstrate that our method outperforms other strong RA-VQA systems. On two knowledge-intensive VQA benchmarks, our method achieves new state-of-the-art performance among models of similar parameter scale and even performs comparably to models with 1-2 orders of magnitude more parameters.
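To make the retrieval idea in the abstract concrete, here is a minimal, hypothetical sketch of the pipeline it describes: an LLM names the key visual entities a question depends on, each entity is encoded as its own query, and documents are scored against the question and every entity query separately. The function names, the stub encoder, and the max-over-queries scoring rule are all assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of LLM-assisted multimodal retrieval for RA-VQA.
# All names and the scoring rule are assumptions, not the paper's code.
import numpy as np


def llm_extract_key_entities(question: str, image_caption: str) -> list[str]:
    """Stand-in for prompting an LLM to name the visual entities the
    question actually depends on (e.g. a landmark, a species, a logo)."""
    # In practice this would be an LLM call; a fixed example is returned here.
    return ["Eiffel Tower"]


def encode_text(text: str) -> np.ndarray:
    """Stand-in for a text/multimodal encoder; returns a unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)


def retrieve(question: str, image_caption: str,
             corpus: dict[str, np.ndarray], k: int = 3) -> list[str]:
    """Encode each key entity independently (avoiding cross-entity
    interference) and rank documents by their best match to any query."""
    entities = llm_extract_key_entities(question, image_caption)
    queries = [encode_text(question)] + [encode_text(e) for e in entities]
    scores = {doc: max(float(q @ emb) for q in queries)
              for doc, emb in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]


if __name__ == "__main__":
    corpus = {f"doc_{i}": encode_text(f"passage {i}") for i in range(10)}
    print(retrieve("When was this tower built?",
                   "A photo of the Eiffel Tower at night", corpus))
```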
Anthology ID:
2024.emnlp-main.613
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10939–10956
URL:
https://aclanthology.org/2024.emnlp-main.613
Cite (ACL):
Pu Jian, Donglei Yu, and Jiajun Zhang. 2024. Large Language Models Know What is Key Visual Entity: An LLM-assisted Multimodal Retrieval for VQA. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10939–10956, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Know What is Key Visual Entity: An LLM-assisted Multimodal Retrieval for VQA (Jian et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.613.pdf