VideoCoT: A Video Chain-of-Thought Dataset with Active Annotation Tool

Yan Wang, Yawen Zeng, Jingsheng Zheng, Xiaofen Xing, Jin Xu, Xiangmin Xu


Abstract
Multimodal large language models (MLLMs) are flourishing, but they mainly focus on images, with less attention paid to videos, especially in sub-fields such as prompt engineering, video chain-of-thought (CoT), and instruction tuning on videos. Therefore, we explore the collection of video CoT datasets to advance video OpenQA and improve the reasoning ability of MLLMs. Unfortunately, building such video CoT datasets is not an easy task. Given that human annotation is too cumbersome and expensive, while machine-generated annotations are unreliable due to the hallucination issue, we develop an automatic annotation tool that combines machine and human experts under the active learning paradigm. Active learning is an interactive strategy between the model and human experts; in this way, the workload of human labeling can be reduced while the quality of the dataset is guaranteed. With the help of this automatic annotation tool, we contribute three datasets, namely VideoCoT, TopicQA, and TopicCoT. Furthermore, we propose a simple but effective benchmark based on the collected datasets, which exploits CoT to maximize the complex reasoning capabilities of MLLMs. Extensive experiments demonstrate the effectiveness of our solution, and we will release our source code and datasets to facilitate the research community.
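To make the machine-human loop concrete, below is a minimal, hypothetical sketch (in Python) of the active-learning annotation procedure the abstract describes: the model proposes CoT annotations, confident proposals are accepted, and uncertain ones are routed to human experts whose corrections also refine the model. All names here (generate_cot, expert_review, the confidence threshold) are illustrative assumptions, not the authors' actual implementation.

    # Hypothetical sketch of the machine-human active-learning annotation loop.
    # "model" is assumed to expose generate_cot() and finetune(); these are
    # placeholders, not an API from the paper.
    def active_annotation_loop(videos, model, expert_review, threshold=0.8, rounds=3):
        """Iteratively annotate videos, deferring uncertain samples to humans."""
        dataset = []
        pool = list(videos)
        for _ in range(rounds):
            uncertain = []
            for video in pool:
                cot, confidence = model.generate_cot(video)  # machine proposal
                if confidence >= threshold:
                    dataset.append((video, cot))             # accept as-is
                else:
                    uncertain.append((video, cot))           # defer to experts
            # Experts correct only the low-confidence proposals, which is what
            # keeps the human labeling workload small.
            corrected = [(v, expert_review(v, c)) for v, c in uncertain]
            dataset.extend(corrected)
            model.finetune(corrected)  # expert feedback improves the annotator
            pool = []                  # in practice, refill with new videos
        return dataset

The design point of such a loop is that human effort concentrates where the model is least reliable, so annotation quality is preserved without labeling every sample by hand.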
Anthology ID:
2024.alvr-1.8
Volume:
Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Jing Gu, Tsu-Jui (Ray) Fu, Drew Hudson, Asli Celikyilmaz, William Wang
Venues:
ALVR | WS
Publisher:
Association for Computational Linguistics
Pages:
92–101
URL:
https://aclanthology.org/2024.alvr-1.8
DOI:
10.18653/v1/2024.alvr-1.8
Cite (ACL):
Yan Wang, Yawen Zeng, Jingsheng Zheng, Xiaofen Xing, Jin Xu, and Xiangmin Xu. 2024. VideoCoT: A Video Chain-of-Thought Dataset with Active Annotation Tool. In Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR), pages 92–101, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
VideoCoT: A Video Chain-of-Thought Dataset with Active Annotation Tool (Wang et al., ALVR-WS 2024)
PDF:
https://aclanthology.org/2024.alvr-1.8.pdf