VIEWS: Entity-Aware News Video Captioning
Hammad Ayyubi | Tianqi Liu | Arsha Nagrani | Xudong Lin | Mingda Zhang | Anurag Arnab | Feng Han | Yukun Zhu | Xuande Feng | Kevin Zhang | Jialu Liu | Shih-Fu Chang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Popular existing video captioning benchmarks and models produce generic captions that fail to identify specific individuals, locations, or organizations (named entities). News videos, however, present a more demanding setting: meaningful summarization requires the inclusion of such named entities. We therefore introduce the task of directly summarizing news videos into entity-aware captions. To facilitate research in this area, we collect a large-scale dataset named VIEWS (VIdeo NEWS). The task poses challenges inherent to recognizing named entities and navigating diverse, dynamic contexts while relying solely on visual cues. To address these challenges, we propose a model-agnostic approach that enriches visual information extracted from videos with context sourced from external knowledge, enabling the generation of entity-aware captions. We validate the effectiveness of our approach on three video captioning models. We also conduct a critical analysis of our methodology to offer insight into the complexity of the task, the challenges it presents, and potential avenues for future research.