Video Language Co-Attention with Multimodal Fast-Learning Feature Fusion for VideoQA

Adnen Abdessaied, Ekta Sood, Andreas Bulling


Abstract
We propose the Video Language Co-Attention Network (VLCN), a novel memory-enhanced model for Video Question Answering (VideoQA). Our model combines two original contributions: a multi-modal fast-learning feature fusion (FLF) block and a mechanism that uses self-attended language features to separately guide neural attention on both static and dynamic visual features extracted from individual video frames and short video clips. When trained from scratch, VLCN achieves results competitive with the state of the art on both MSVD-QA and MSRVTT-QA, with 38.06% and 36.01% test accuracies, respectively. Through an ablation study, we further show that FLF improves generalization across different VideoQA datasets and improves performance on question types that are notoriously challenging in current datasets, such as long questions that require deeper reasoning and questions with rare answers.
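To make the guidance mechanism in the abstract concrete, the following is a minimal illustrative sketch only: the paper's exact architecture is not reproduced on this page, so the module names (LanguageGuidedAttention, VideoLanguageCoAttention), feature dimensions, and fusion layer below are all assumptions. It shows the general pattern of a self-attended question vector separately attending over static frame features and dynamic clip features, with the attended vectors then fused.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageGuidedAttention(nn.Module):
    # Hypothetical module: attends over a sequence of visual features
    # using a single language query vector (additive attention).
    def __init__(self, lang_dim: int, vis_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, hidden_dim)
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, lang_query, vis_feats):
        # lang_query: (B, lang_dim); vis_feats: (B, T, vis_dim)
        q = self.lang_proj(lang_query).unsqueeze(1)          # (B, 1, H)
        k = self.vis_proj(vis_feats)                         # (B, T, H)
        scores = self.score(torch.tanh(q + k)).squeeze(-1)   # (B, T)
        alpha = F.softmax(scores, dim=-1)                    # attention weights
        # Weighted sum of visual features: (B, vis_dim)
        return torch.bmm(alpha.unsqueeze(1), vis_feats).squeeze(1)

class VideoLanguageCoAttention(nn.Module):
    # Hypothetical wrapper: separate language-guided attention over
    # static (frame) and dynamic (clip) streams, then a simple fusion.
    # All dimensions below are placeholder assumptions.
    def __init__(self, lang_dim=512, frame_dim=2048, clip_dim=2048, hidden_dim=512):
        super().__init__()
        self.frame_attn = LanguageGuidedAttention(lang_dim, frame_dim, hidden_dim)
        self.clip_attn = LanguageGuidedAttention(lang_dim, clip_dim, hidden_dim)
        self.fuse = nn.Linear(lang_dim + frame_dim + clip_dim, hidden_dim)

    def forward(self, lang_query, frame_feats, clip_feats):
        v_static = self.frame_attn(lang_query, frame_feats)   # attend to frames
        v_dynamic = self.clip_attn(lang_query, clip_feats)    # attend to clips
        fused = torch.cat([lang_query, v_static, v_dynamic], dim=-1)
        return torch.relu(self.fuse(fused))

# Usage with random tensors: batch of 2, 20 frames, 8 clips.
model = VideoLanguageCoAttention()
q = torch.randn(2, 512)
frames = torch.randn(2, 20, 2048)
clips = torch.randn(2, 8, 2048)
out = model(q, frames, clips)  # (2, 512)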
Anthology ID:
2022.repl4nlp-1.15
Volume:
Proceedings of the 7th Workshop on Representation Learning for NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Spandana Gella, He He, Bodhisattwa Prasad Majumder, Burcu Can, Eleonora Giunchiglia, Samuel Cahyawijaya, Sewon Min, Maximilian Mozes, Xiang Lorraine Li, Isabelle Augenstein, Anna Rogers, Kyunghyun Cho, Edward Grefenstette, Laura Rimell, Chris Dyer
Venue:
RepL4NLP
Publisher:
Association for Computational Linguistics
Pages:
143–155
URL:
https://aclanthology.org/2022.repl4nlp-1.15
DOI:
10.18653/v1/2022.repl4nlp-1.15
Bibkey:
Cite (ACL):
Adnen Abdessaied, Ekta Sood, and Andreas Bulling. 2022. Video Language Co-Attention with Multimodal Fast-Learning Feature Fusion for VideoQA. In Proceedings of the 7th Workshop on Representation Learning for NLP, pages 143–155, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Video Language Co-Attention with Multimodal Fast-Learning Feature Fusion for VideoQA (Abdessaied et al., RepL4NLP 2022)
PDF:
https://aclanthology.org/2022.repl4nlp-1.15.pdf
Video:
https://aclanthology.org/2022.repl4nlp-1.15.mp4
Data
MSVD-QA
MSRVTT-QA