Yicong Li
2024
Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives
Thong Nguyen | Yi Bin | Junbin Xiao | Leigang Qu | Yicong Li | Jay Zhangjie Wu | Cong-Duy Nguyen | See-Kiong Ng | Anh Tuan Luu
Findings of the Association for Computational Linguistics: ACL 2024
Humans use multiple senses to comprehend their environment. Vision and language are two of the most vital of these senses, since they allow us to communicate our thoughts and perceive the world around us. There has been considerable interest in building video-language understanding systems with human-like capabilities, since a video-language pair can capture both our linguistic medium and our visual environment with its temporal dynamics. In this survey, we review the key tasks of these systems and highlight the associated challenges. Based on these challenges, we summarize their methods from the model architecture, model training, and data perspectives. We also compare the performance of these methods and discuss promising directions for future research.
2022
Video Question Answering: Datasets, Algorithms and Challenges
Yaoyao Zhong | Wei Ji | Junbin Xiao | Yicong Li | Weihong Deng | Tat-Seng Chua
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
This survey aims to sort out the recent advances in video question answering (VideoQA) and point towards future directions. We first categorize the datasets into 1) normal VideoQA, multi-modal VideoQA, and knowledge-based VideoQA, according to the modalities invoked in the question-answer pairs, or 2) factoid VideoQA and inference VideoQA, according to the technical challenges in comprehending the questions and deriving the correct answers. We then summarize the VideoQA techniques, including those mainly designed for factoid QA (e.g., the early spatio-temporal attention-based methods and the recent Transformer-based ones) and those targeted at explicit relation and logic inference (e.g., neural modular networks, neural symbolic methods, and graph-structured methods). Aside from the backbone techniques, we delve into the specific models and distill common and useful insights for video modeling, question answering, and cross-modal correspondence learning. Finally, we point out the research trend of moving beyond factoid VideoQA to inference VideoQA, as well as towards robustness and interpretability. Additionally, we maintain a repository, https://github.com/VRU-NExT/VideoQA, to keep track of the latest VideoQA papers, datasets, and their open-source implementations where available. With these efforts, we hope this survey can shed light on follow-up VideoQA research.