Navigating Hallucinations for Reasoning of Unintentional Activities

Shresth Grover, Vibhav Vineet, Yogesh Rawat


Abstract
In this work, we present a novel task of understanding unintentional human activities in videos. We formalize this problem as a reasoning task in a zero-shot setting: given a video of an unintentional activity, we want to know why the activity transitioned from intentional to unintentional. We first evaluate the effectiveness of current state-of-the-art Large Multimodal Models on this reasoning task and observe that they suffer from hallucination. We further propose a novel prompting technique, termed Dream of Thoughts (DoT), which allows the model to navigate through hallucinated thoughts to achieve better reasoning. To evaluate performance on this task, we also introduce three specialized metrics designed to quantify the model's reasoning capability. We perform our experiments on three datasets, OOPs, UCF-Crime, and ReUAct, and our findings show that the DoT prompting technique outperforms standard prompting while minimizing hallucinations.
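To make the task setup concrete, below is a minimal sketch of a zero-shot reasoning pipeline of the kind the abstract describes. This is not the paper's DoT procedure (its details are in the paper itself); the `query_lmm` function, the prompt wording, and the sample-and-vote selection over candidate reasoning chains are all illustrative assumptions, shown only to convey the shape of the problem.

```python
from collections import Counter


def query_lmm(video_path: str, prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for a call to a large multimodal model (hypothetical API).

    In practice this would send sampled video frames plus the text prompt
    to an LMM of choice and return its free-form answer.
    """
    raise NotImplementedError("wire this to your LMM of choice")


def reason_about_transition(video_path: str, n_samples: int = 5) -> str:
    """Sample several candidate reasoning chains and keep the most consistent verdict.

    A generic sample-and-select heuristic, NOT the paper's DoT method;
    it only illustrates a zero-shot pipeline for this reasoning task.
    """
    prompt = (
        "The video shows an activity that starts as intentional and becomes "
        "unintentional. Explain, step by step, why the transition happened, "
        "then state the cause in one short sentence prefixed with 'CAUSE:'."
    )
    causes = []
    for _ in range(n_samples):
        answer = query_lmm(video_path, prompt, temperature=0.7)
        # Keep only the final one-line verdict from each sampled chain.
        for line in answer.splitlines():
            if line.startswith("CAUSE:"):
                causes.append(line.removeprefix("CAUSE:").strip().lower())
    if not causes:
        return "no cause extracted"
    # Majority vote over sampled verdicts damps one-off hallucinated chains.
    return Counter(causes).most_common(1)[0][0]
```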
Anthology ID:
2024.findings-emnlp.565
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9666–9680
URL:
https://aclanthology.org/2024.findings-emnlp.565
Cite (ACL):
Shresth Grover, Vibhav Vineet, and Yogesh Rawat. 2024. Navigating Hallucinations for Reasoning of Unintentional Activities. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9666–9680, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Navigating Hallucinations for Reasoning of Unintentional Activities (Grover et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.565.pdf