Rethinking Multi-Modal Alignment in Multi-Choice VideoQA from Feature and Sample Perspectives

Shaoning Xiao, Long Chen, Kaifeng Gao, Zhao Wang, Yi Yang, Zhimeng Zhang, Jun Xiao


Abstract
Reasoning about causal and temporal event relations in videos is an emerging goal of Video Question Answering (VideoQA). The major stumbling block to achieving this goal is the semantic gap between language and video, since the two modalities sit at different levels of abstraction. Existing efforts mainly focus on designing sophisticated architectures while using frame- or object-level visual representations. In this paper, we reconsider the multi-modal alignment problem in VideoQA from the feature and sample perspectives to achieve better performance. From the feature perspective, we break the video down into trajectories and, for the first time, leverage trajectory features in VideoQA to enhance the alignment between the two modalities. Moreover, we adopt a heterogeneous graph architecture and design a hierarchical framework to align both trajectory-level and frame-level visual features with language features. In addition, we find that VideoQA models depend heavily on language priors and often neglect visual-language interactions. We therefore design two effective yet portable training augmentation strategies to strengthen the cross-modal correspondence ability of our model from the sample perspective. Extensive experiments show that our method outperforms all state-of-the-art models on the challenging NExT-QA benchmark.
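To make the hierarchical alignment idea concrete, below is a minimal, hypothetical sketch of aligning trajectory-level and frame-level visual features with question-answer tokens via cross-attention and fusing the two streams into an answer score. The module names, dimensions, attention-based alignment, and fusion scheme are illustrative assumptions for exposition, not the paper's released implementation (which uses a heterogeneous graph).

```python
# Hypothetical sketch: two-level visual-language alignment for multi-choice VideoQA.
# All design choices (cross-attention, mean pooling, concatenation) are assumptions.
import torch
import torch.nn as nn


class CrossModalAlign(nn.Module):
    """Align one visual stream with language tokens via cross-attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual: torch.Tensor, language: torch.Tensor) -> torch.Tensor:
        # Visual features attend to language tokens (query = visual, key/value = language).
        aligned, _ = self.attn(visual, language, language)
        return self.norm(visual + aligned)


class HierarchicalVideoQA(nn.Module):
    """Score one candidate answer from trajectory- and frame-level features."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.traj_align = CrossModalAlign(dim)
        self.frame_align = CrossModalAlign(dim)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, traj_feats, frame_feats, qa_feats):
        # traj_feats:  (B, T_obj, D) object trajectory features
        # frame_feats: (B, T_frm, D) frame-level appearance features
        # qa_feats:    (B, L, D)     question + candidate answer tokens
        traj = self.traj_align(traj_feats, qa_feats).mean(dim=1)
        frame = self.frame_align(frame_feats, qa_feats).mean(dim=1)
        return self.score(torch.cat([traj, frame], dim=-1))  # (B, 1) answer score


# Usage: score one candidate answer for a batch of 2 videos.
model = HierarchicalVideoQA()
score = model(torch.randn(2, 8, 256), torch.randn(2, 16, 256), torch.randn(2, 20, 256))
print(score.shape)  # torch.Size([2, 1])
```

In the multi-choice setting, each of the candidate answers would be scored this way and the highest-scoring option selected.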
Anthology ID:
2022.emnlp-main.561
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8188–8198
URL:
https://aclanthology.org/2022.emnlp-main.561
DOI:
10.18653/v1/2022.emnlp-main.561
Cite (ACL):
Shaoning Xiao, Long Chen, Kaifeng Gao, Zhao Wang, Yi Yang, Zhimeng Zhang, and Jun Xiao. 2022. Rethinking Multi-Modal Alignment in Multi-Choice VideoQA from Feature and Sample Perspectives. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8188–8198, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Rethinking Multi-Modal Alignment in Multi-Choice VideoQA from Feature and Sample Perspectives (Xiao et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.561.pdf