Question-Instructed Visual Descriptions for Zero-Shot Video Answering

David Mogrovejo, Thamar Solorio


Abstract
We present Q-ViD, a simple approach for video question answering (video QA). Unlike prior methods, which rely on complex architectures, computationally expensive pipelines, or closed models such as GPT, Q-ViD uses a single instruction-aware open vision-language model (InstructBLIP) to tackle video QA through frame descriptions. Specifically, we create captioning instruction prompts from the target questions about the videos and use InstructBLIP to obtain frame captions that are useful for the task at hand. We then form a description of the whole video from these question-dependent frame captions and feed it, along with a question-answering prompt, to a large language model (LLM). The LLM serves as our reasoning module and performs the final multiple-choice QA step. Despite its simplicity, Q-ViD achieves performance that is competitive with, or even higher than, current state-of-the-art models on a diverse range of video QA benchmarks, including NExT-QA, STAR, How2QA, TVQA, and IntentQA.
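The two-stage pipeline in the abstract, question-instructed frame captioning followed by LLM-based multiple-choice QA, can be sketched as prompt assembly. The templates below are illustrative assumptions, not the paper's exact prompts, and the actual InstructBLIP captioning and LLM calls are omitted:

```python
def captioning_prompt(question: str) -> str:
    """Build a question-instructed caption prompt for one video frame.

    The wording is a hypothetical stand-in for the paper's template.
    """
    return (
        f"Considering the question: '{question}', "
        "describe the relevant content of this frame."
    )


def qa_prompt(frame_captions: list[str], question: str, options: list[str]) -> str:
    """Combine per-frame captions into a video description and append
    a multiple-choice QA prompt for the reasoning LLM."""
    # Concatenate the question-dependent frame captions into one description.
    video_description = " ".join(
        f"Frame {i + 1}: {caption}" for i, caption in enumerate(frame_captions)
    )
    # Label answer options (A), (B), ... as in multiple-choice QA.
    choices = "\n".join(
        f"({chr(ord('A') + i)}) {option}" for i, option in enumerate(options)
    )
    return (
        f"Video description: {video_description}\n"
        f"Question: {question}\n"
        f"Options:\n{choices}\n"
        "Answer with the letter of the correct option."
    )
```

In the full system, `captioning_prompt` would be passed to InstructBLIP once per sampled frame, and the string returned by `qa_prompt` would be sent to the LLM, whose output letter is the predicted answer.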
Anthology ID:
2024.findings-acl.555
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9329–9339
URL:
https://aclanthology.org/2024.findings-acl.555
Cite (ACL):
David Mogrovejo and Thamar Solorio. 2024. Question-Instructed Visual Descriptions for Zero-Shot Video Answering. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9329–9339, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Question-Instructed Visual Descriptions for Zero-Shot Video Answering (Mogrovejo & Solorio, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.555.pdf