Enhancing Temporal Modeling of Video LLMs via Time Gating

Zi-Yuan Hu, Yiwu Zhong, Shijia Huang, Michael Lyu, Liwei Wang


Abstract
Video Large Language Models (Video LLMs) have achieved impressive performance on video-and-language tasks such as video question answering. However, most existing Video LLMs neglect temporal information in video data, causing them to struggle with temporally aware video understanding. To address this gap, we propose a Time Gating Video LLM (TG-Vid), designed to enhance temporal modeling through a novel Time Gating module (TG). The TG module applies a time gating mechanism to each of its sub-modules: gating spatial attention, gating temporal attention, and gating MLP. This architecture enables our model to achieve a robust understanding of temporal information within videos. Extensive evaluation on temporally sensitive video benchmarks (i.e., MVBench, TempCompass, and NExT-QA) demonstrates that TG-Vid significantly outperforms existing Video LLMs. Further, comprehensive ablation studies validate that the performance gains are attributable to the design of our TG module. Our code is available at https://github.com/LaVi-Lab/TG-Vid.
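The abstract describes the TG module as three gated residual sub-modules applied in sequence. Below is a minimal PyTorch sketch of that structure, assuming a Flamingo-style tanh gate with a zero-initialized learnable scalar per sub-module and a (batch, time, space, dim) video-token layout; the class names (TimeGatingBlock, GatedSubModule, SelfAttention) and all of these details are illustrative assumptions, not the authors' implementation, which is available at the repository linked above.

```python
# Minimal sketch of a time-gating block (assumptions noted in the text above;
# see https://github.com/LaVi-Lab/TG-Vid for the authors' actual implementation).
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Pre-norm multi-head self-attention over a (batch, seq, dim) input."""

    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return out


class GatedSubModule(nn.Module):
    """Residual wrapper: x + tanh(alpha) * f(x), with alpha initialized to
    zero so the wrapped sub-module starts as an identity mapping (assumed)."""

    def __init__(self, module: nn.Module):
        super().__init__()
        self.module = module
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.tanh(self.alpha) * self.module(x)


class TimeGatingBlock(nn.Module):
    """Gating spatial attention -> gating temporal attention -> gating MLP,
    over video tokens shaped (batch, time, space, dim)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.spatial_attn = GatedSubModule(SelfAttention(dim, heads))
        self.temporal_attn = GatedSubModule(SelfAttention(dim, heads))
        self.mlp = GatedSubModule(nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, s, d = x.shape
        # Spatial attention: tokens attend within their own frame.
        x = self.spatial_attn(x.reshape(b * t, s, d)).reshape(b, t, s, d)
        # Temporal attention: each spatial position attends across frames.
        x = x.transpose(1, 2).reshape(b * s, t, d)
        x = self.temporal_attn(x).reshape(b, s, t, d).transpose(1, 2)
        # Gated feed-forward applied to every token.
        return self.mlp(x)
```

For example, `TimeGatingBlock(dim=768)(torch.randn(2, 8, 16, 768))` returns a tensor of the same shape. Zero-initializing each gate makes the block an identity mapping at the start of training, a common choice when inserting new temporal modules into a pretrained Video LLM so that its behavior is initially preserved.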
Anthology ID:
2024.findings-emnlp.162
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2845–2856
URL:
https://aclanthology.org/2024.findings-emnlp.162
Cite (ACL):
Zi-Yuan Hu, Yiwu Zhong, Shijia Huang, Michael Lyu, and Liwei Wang. 2024. Enhancing Temporal Modeling of Video LLMs via Time Gating. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2845–2856, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Enhancing Temporal Modeling of Video LLMs via Time Gating (Hu et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.162.pdf