Tag-grounded Visual Instruction Tuning with Retrieval Augmentation

Daiqing Qi, Handong Zhao, Zijun Wei, Sheng Li


Abstract
Despite recent advances in the general visual instruction-following ability of Multimodal Large Language Models (MLLMs), they still struggle with critical problems when required to provide a precise and detailed response to a visual instruction: (1) failure to identify novel objects or entities, (2) mention of non-existent objects, and (3) neglect of objects' attribute details. Intuitive solutions include improving the size and quality of data or using larger foundation models. They are effective in mitigating these issues, but at the high cost of collecting vast amounts of new data and introducing a significantly larger model. Standing at the intersection of these approaches, we examine the three object-oriented problems from the perspective of the image-to-text mapping process performed by the multimodal connector. In this paper, we first identify the limitations of multimodal connectors stemming from insufficient training data. Driven by this, we propose to enhance the mapping with retrieval-augmented tag tokens, which contain rich object-aware information such as object names and attributes. With our Tag-grounded visual instruction tuning with retrieval Augmentation (TUNA), we outperform baselines that share the same language model and training data on 12 benchmarks. Furthermore, we show the zero-shot capability of TUNA when provided with specific datastores.
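The abstract describes augmenting the connector's visual tokens with tag tokens retrieved from a datastore. The following is a minimal illustrative sketch of that retrieval step, assuming a nearest-neighbor lookup over image embeddings; all embeddings, tags, and function names here are hypothetical, not from the paper.

```python
import math

# Hypothetical datastore: image embeddings paired with object tags
# (object names plus attributes). Values are illustrative only.
DATASTORE = [
    ([0.9, 0.1, 0.0], ["red apple", "wooden table"]),
    ([0.1, 0.9, 0.0], ["black cat", "green sofa"]),
    ([0.0, 0.2, 0.9], ["yellow taxi", "wet street"]),
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_tags(query_embedding, k=1):
    """Return the tags of the k datastore entries nearest to the query image.

    In a TUNA-style pipeline, the retrieved tag tokens would be supplied
    alongside the connector's visual tokens to ground the language model.
    """
    ranked = sorted(DATASTORE,
                    key=lambda entry: cosine(query_embedding, entry[0]),
                    reverse=True)
    tags = []
    for _, entry_tags in ranked[:k]:
        tags.extend(entry_tags)
    return tags

# A query embedding close to the first datastore entry retrieves its tags.
print(retrieve_tags([0.85, 0.2, 0.05]))  # → ['red apple', 'wooden table']
```

Swapping in a different datastore (e.g. a domain-specific one) changes the retrieved tags without retraining, which is the intuition behind the zero-shot capability mentioned in the abstract.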
Anthology ID:
2024.emnlp-main.120
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2008–2026
URL:
https://aclanthology.org/2024.emnlp-main.120
DOI:
10.18653/v1/2024.emnlp-main.120
Cite (ACL):
Daiqing Qi, Handong Zhao, Zijun Wei, and Sheng Li. 2024. Tag-grounded Visual Instruction Tuning with Retrieval Augmentation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2008–2026, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Tag-grounded Visual Instruction Tuning with Retrieval Augmentation (Qi et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.120.pdf