MuKA: Multimodal Knowledge Augmented Visual Information-Seeking

Lianghao Deng, Yuchong Sun, Shizhe Chen, Ning Yang, Yunfeng Wang, Ruihua Song


Abstract
The visual information-seeking task aims to answer visual questions that require external knowledge, such as “On what date did this building officially open?”. Existing methods based on the retrieval-augmented generation framework primarily rely on textual knowledge bases to help multimodal large language models (MLLMs) answer questions. However, text-only knowledge can impair retrieval for a multimodal query consisting of an image and a question, and can also confuse MLLMs when they select the most relevant information during generation. In this work, we propose MuKA, a novel framework that leverages a multimodal knowledge base to address these limitations. Specifically, we construct a multimodal knowledge base by automatically pairing images with text passages in existing datasets. We then design a fine-grained multimodal interaction to effectively retrieve multimodal documents and enrich MLLMs with both retrieved texts and images. MuKA outperforms state-of-the-art methods by 38.7% and 15.9% on the InfoSeek and E-VQA benchmarks, respectively, demonstrating the importance of multimodal knowledge in enhancing both retrieval and answer generation.
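As a rough illustration of the pipeline the abstract describes, the Python sketch below retrieves from a toy multimodal knowledge base and assembles an MLLM prompt from both the retrieved passage and its paired image. Everything here is hypothetical: the placeholder encoders (embed_text, embed_image), the additive fusion score, and the two-entry kb are stand-ins, not the paper's fine-grained multimodal interaction or models.

import hashlib
import numpy as np

def embed_text(text: str, dim: int = 64) -> np.ndarray:
    # Placeholder text encoder: a deterministic pseudo-embedding,
    # seeded by a hash so the sketch runs without a real model.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def embed_image(image_id: str, dim: int = 64) -> np.ndarray:
    # Placeholder image encoder keyed by an image identifier.
    return embed_text("img:" + image_id, dim)

# A multimodal knowledge base: each document pairs a text passage with an image,
# mimicking the automatic image-passage pairing the abstract mentions.
kb = [
    {"image": "eiffel_tower.jpg",
     "text": "The Eiffel Tower officially opened on 31 March 1889."},
    {"image": "sydney_opera.jpg",
     "text": "The Sydney Opera House officially opened on 20 October 1973."},
]

def score(query_img: str, query_txt: str, doc: dict) -> float:
    # Coarse stand-in for fine-grained multimodal interaction:
    # sum of image-image and text-text cosine similarities.
    s_img = float(embed_image(query_img) @ embed_image(doc["image"]))
    s_txt = float(embed_text(query_txt) @ embed_text(doc["text"]))
    return s_img + s_txt

def retrieve(query_img: str, query_txt: str, k: int = 1) -> list:
    return sorted(kb, key=lambda d: score(query_img, query_txt, d), reverse=True)[:k]

# Enrich the generator's input with both retrieved text and image references.
query_image = "sydney_opera.jpg"
question = "On what date did this building officially open?"
docs = retrieve(query_image, question)
prompt = (f"Question: {question}\nQuery image: {query_image}\n"
          + "\n".join(f"Evidence image: {d['image']}\nEvidence text: {d['text']}"
                      for d in docs))
print(prompt)

In a real system the pseudo-embeddings would be replaced by learned image and text encoders, and the retrieved images would be fed to the MLLM as visual inputs rather than as file names in a text prompt.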
Anthology ID:
2025.coling-main.647
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
9675–9686
URL:
https://aclanthology.org/2025.coling-main.647/
Cite (ACL):
Lianghao Deng, Yuchong Sun, Shizhe Chen, Ning Yang, Yunfeng Wang, and Ruihua Song. 2025. MuKA: Multimodal Knowledge Augmented Visual Information-Seeking. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9675–9686, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
MuKA: Multimodal Knowledge Augmented Visual Information-Seeking (Deng et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.647.pdf