Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives

Thong Nguyen, Yi Bin, Junbin Xiao, Leigang Qu, Yicong Li, Jay Zhangjie Wu, Cong-Duy Nguyen, See-Kiong Ng, Anh Tuan Luu


Abstract
Humans use multiple senses to comprehend their environment. Vision and language are two of the most vital of these, since they allow us to perceive the world around us and communicate our thoughts with ease. There has been considerable interest in building video-language understanding systems with human-like capabilities, since a video-language pair can capture both our linguistic medium and our visual environment with its temporal dynamics. In this survey, we review the key tasks of such systems and highlight the associated challenges. Based on these challenges, we summarize existing methods from the model architecture, model training, and data perspectives. We also compare the performance of these methods and discuss promising directions for future research.
Anthology ID: 2024.findings-acl.217
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3636–3657
URL: https://aclanthology.org/2024.findings-acl.217
DOI: 10.18653/v1/2024.findings-acl.217
Cite (ACL): Thong Nguyen, Yi Bin, Junbin Xiao, Leigang Qu, Yicong Li, Jay Zhangjie Wu, Cong-Duy Nguyen, See-Kiong Ng, and Anh Tuan Luu. 2024. Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives. In Findings of the Association for Computational Linguistics: ACL 2024, pages 3636–3657, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives (Nguyen et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.217.pdf