Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision

Seongyun Lee, Sue Park, Yongrae Jo, Minjoon Seo


Abstract
Large multimodal models suffer from multimodal hallucination, where they provide incorrect responses misaligned with the given visual information. Recent works have conjectured that one of the reasons behind multimodal hallucination is the vision encoder failing to ground on the image properly. To mitigate this issue, we propose a novel approach that leverages self-feedback as visual cues. Building on this approach, we introduce Volcano, a multimodal self-feedback guided revision model. Volcano generates natural language feedback on its initial response based on the provided visual information and utilizes this feedback to self-revise its initial response. Volcano effectively reduces multimodal hallucination and achieves state-of-the-art results on MMHal-Bench, POPE, and GAVIE. It also improves general multimodal abilities and outperforms previous models on MM-Vet and MMBench. Through qualitative analysis, we show that Volcano’s feedback is more properly grounded in the image than the initial response. This indicates that Volcano can provide itself with richer visual information through feedback generation, enabling it to self-correct hallucinations. We publicly release our model, data, and code at https://github.com/kaistAI/Volcano
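The abstract describes a critique-then-revise loop: the model first answers, then generates feedback on that answer conditioned on the image, and finally rewrites the answer using the feedback. The sketch below illustrates this flow under stated assumptions; the model object, prompt wording, and the generate helper are hypothetical placeholders, not the released Volcano implementation.

# Minimal sketch of self-feedback guided revision, assuming a multimodal
# model exposing a generate(image=..., prompt=...) -> str method (hypothetical).
def self_feedback_revision(model, image, question, max_iterations=3):
    """Iteratively critique and revise an answer using self-generated feedback."""
    answer = model.generate(image=image, prompt=question)
    for _ in range(max_iterations):
        # 1. Self-feedback: critique the current answer against the visual input.
        feedback = model.generate(
            image=image,
            prompt=(f"Question: {question}\nAnswer: {answer}\n"
                    "Give feedback on whether the answer is grounded in the image."),
        )
        # 2. Self-revision: rewrite the answer conditioned on the feedback.
        revised = model.generate(
            image=image,
            prompt=(f"Question: {question}\nAnswer: {answer}\n"
                    f"Feedback: {feedback}\nRevise the answer using the feedback."),
        )
        # 3. Stop once the revision no longer changes (simple convergence check).
        if revised == answer:
            break
        answer = revised
    return answer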
Anthology ID:
2024.naacl-long.23
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
391–404
URL:
https://aclanthology.org/2024.naacl-long.23
Cite (ACL):
Seongyun Lee, Sue Park, Yongrae Jo, and Minjoon Seo. 2024. Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 391–404, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision (Lee et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.23.pdf
Copyright:
2024.naacl-long.23.copyright.pdf