EchoSight: Advancing Visual-Language Models with Wiki Knowledge

Yibin Yan, Weidi Xie


Abstract
Knowledge-based Visual Question Answering (KVQA) tasks require answering questions about images using extensive background knowledge. Despite significant advancements, generative models often struggle with these tasks due to limited integration of external knowledge. In this paper, we introduce **EchoSight**, a novel multimodal Retrieval-Augmented Generation (RAG) framework that enables large language models (LLMs) to answer visual questions requiring fine-grained encyclopedic knowledge. To achieve high-performing retrieval, EchoSight first searches wiki articles using visual-only information; these candidate articles are then reranked according to their relevance to the combined text-image query. This approach significantly improves the integration of multimodal knowledge, leading to better retrieval outcomes and more accurate VQA responses. Our experimental results on the E-VQA and InfoSeek datasets demonstrate that EchoSight establishes new state-of-the-art results in knowledge-based VQA, achieving an accuracy of 41.8% on E-VQA and 31.3% on InfoSeek.
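The two-stage retrieve-then-rerank pipeline described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden outline, not the authors' implementation: it uses placeholder embeddings, plain cosine similarity, and a hypothetical fusion weight `alpha` to stand in for the paper's dedicated visual retriever and multimodal reranker.

```python
# Sketch of a retrieve-then-rerank pipeline in the spirit of EchoSight.
# All encoders and the fusion scheme here are illustrative assumptions.

import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two embedding matrices."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def retrieve_by_image(query_img_emb, article_img_embs, k=20):
    """Stage 1: visual-only search over the wiki knowledge base."""
    scores = cosine_sim(query_img_emb[None, :], article_img_embs)[0]
    return np.argsort(-scores)[:k]

def rerank_multimodal(query_text_emb, query_img_emb, article_text_embs,
                      candidates, alpha=0.5):
    """Stage 2: rerank candidates by relevance to the combined
    text-image query. `alpha` is a hypothetical weight between
    textual and visual relevance."""
    fused_query = alpha * query_text_emb + (1 - alpha) * query_img_emb
    scores = cosine_sim(fused_query[None, :], article_text_embs[candidates])[0]
    order = np.argsort(-scores)
    return [candidates[i] for i in order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_articles = 256, 1000
    # Placeholder embeddings standing in for a precomputed wiki index.
    article_img_embs = rng.normal(size=(n_articles, d))
    article_text_embs = rng.normal(size=(n_articles, d))
    query_img_emb = rng.normal(size=d)
    query_text_emb = rng.normal(size=d)

    candidates = retrieve_by_image(query_img_emb, article_img_embs, k=20)
    reranked = rerank_multimodal(query_text_emb, query_img_emb,
                                 article_text_embs, candidates)
    # The top-ranked article's text would then be placed in the LLM prompt
    # (retrieval-augmented generation) to answer the visual question.
    print("Top retrieved article id:", reranked[0])
```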
Anthology ID:
2024.findings-emnlp.83
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1538–1551
URL:
https://aclanthology.org/2024.findings-emnlp.83
Cite (ACL):
Yibin Yan and Weidi Xie. 2024. EchoSight: Advancing Visual-Language Models with Wiki Knowledge. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1538–1551, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
EchoSight: Advancing Visual-Language Models with Wiki Knowledge (Yan & Xie, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.83.pdf