Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models?

Gregor Geigle, Radu Timofte, Goran Glavaš


Abstract
Large vision-language models (LVLMs) have recently dramatically pushed the state of the art in image captioning and many image understanding tasks (e.g., visual question answering). LVLMs, however, often hallucinate and produce captions that mention concepts that cannot be found in the image. These hallucinations erode the trustworthiness of LVLMs and are arguably among the main obstacles to their ubiquitous adoption. Recent work suggests that addition of grounding objectives—those that explicitly align image regions or objects to text spans—reduces the amount of LVLM hallucination. Although intuitive, this claim is not empirically justified as the reduction effects have been established, we argue, with flawed evaluation protocols that (i) rely on data (i.e., MSCOCO) that has been extensively used in LVLM training and (ii) measure hallucination via question answering rather than open-ended caption generation. In this work, in contrast, we offer the first systematic analysis of the effect of fine-grained object grounding on LVLM hallucination under an evaluation protocol that more realistically captures LVLM hallucination in open generation. Our extensive experiments over three backbone LLMs reveal that grounding objectives have little to no effect on object hallucination in open caption generation.
Anthology ID:
2024.emnlp-main.159
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2728–2742
URL:
https://aclanthology.org/2024.emnlp-main.159
Cite (ACL):
Gregor Geigle, Radu Timofte, and Goran Glavaš. 2024. Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models?. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2728–2742, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Does Object Grounding Really Reduce Hallucination of Large Vision-Language Models? (Geigle et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.159.pdf