Minimal Clips, Maximum Salience: Long Video Summarization via Key Moment Extraction

Galann Pennec, Zhengyuan Liu, Nicholas Asher, Philippe Muller, Nancy Chen


Abstract
Vision-Language Models (VLMs) can process increasingly long videos. Yet important visual information is easily diluted across such long contexts and missed by VLMs. Moreover, tools that enable cost-effective analysis of lengthy video content are needed. In this paper, we propose a clip selection method that targets key video moments to be included in a multimodal summary. We divide the video into short clips and generate a compact visual description of each using a lightweight video captioning model. These descriptions are then passed to a large language model (LLM), which selects the K clips containing the most relevant visual information for a multimodal summary. We evaluate our approach on reference clips for the task, automatically derived from full human-annotated screenplays and summaries in the MovieSum dataset. We further show that these reference clips (less than 6% of the movie) are sufficient to build a complete multimodal summary of the movies in MovieSum. Using our clip selection method, we achieve summarization performance close to that of these reference clips while capturing substantially more relevant video information than random clip selection. Importantly, we maintain a low computational cost by relying on a lightweight captioning model.
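
As a rough illustration of the pipeline the abstract describes, the sketch below shows one way the clip selection loop could be wired together: split the video into short clips, caption each clip, and ask an LLM to rank the captions so the top-K clips can be kept. The callables caption_fn and rank_fn stand in for the lightweight captioning model and the LLM; all names, clip lengths, and the toy stand-ins are assumptions for illustration, not the authors' implementation.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Clip:
        index: int
        start_s: float
        end_s: float

    def split_into_clips(duration_s: float, clip_len_s: float = 10.0) -> List[Clip]:
        """Divide a video of the given duration into fixed-length clips (length is an assumed choice)."""
        clips, start, idx = [], 0.0, 0
        while start < duration_s:
            clips.append(Clip(idx, start, min(start + clip_len_s, duration_s)))
            start += clip_len_s
            idx += 1
        return clips

    def select_key_clips(
        clips: List[Clip],
        caption_fn: Callable[[Clip], str],          # stand-in for a lightweight video captioner
        rank_fn: Callable[[List[str]], List[int]],  # stand-in for an LLM ranking captions by salience
        k: int = 5,
    ) -> List[Clip]:
        """Caption each clip, rank the captions, and keep the K most salient clips in temporal order."""
        captions = [caption_fn(c) for c in clips]
        ranked = rank_fn(captions)    # indices ordered from most to least salient
        keep = sorted(ranked[:k])     # restore temporal order for the summary
        return [clips[i] for i in keep]

    if __name__ == "__main__":
        # Toy stand-ins, just to show the data flow end to end.
        clips = split_into_clips(duration_s=120.0, clip_len_s=10.0)
        dummy_caption = lambda c: f"clip {c.index}: people talking in a room"
        dummy_rank = lambda caps: list(range(len(caps)))  # identity ranking
        for clip in select_key_clips(clips, dummy_caption, dummy_rank, k=3):
            print(clip)

In practice the two stand-ins would be replaced by calls to an actual captioning model and an LLM prompt that returns the indices of the clips judged most relevant for the multimodal summary.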
Anthology ID:
2026.iwsds-1.22
Volume:
Proceedings of the 16th International Workshop on Spoken Dialogue System Technology
Month:
February
Year:
2026
Address:
Trento, Italy
Editors:
Giuseppe Riccardi, Seyed Mahed Mousavi, Maria Ines Torres, Koichiro Yoshino, Zoraida Callejas, Shammur Absar Chowdhury, Yun-Nung Chen, Frederic Bechet, Joakim Gustafson, Géraldine Damnati, Alex Papangelis, Luis Fernando D’Haro, John Mendonça, Raffaella Bernardi, Dilek Hakkani-Tur, Giuseppe "Pino" Di Fabbrizio, Tatsuya Kawahara, Firoj Alam, Gokhan Tur, Michael Johnston
Venue:
IWSDS
Publisher:
Association for Computational Linguistics
Pages:
205–221
URL:
https://aclanthology.org/2026.iwsds-1.22/
Cite (ACL):
Galann Pennec, Zhengyuan Liu, Nicholas Asher, Philippe Muller, and Nancy Chen. 2026. Minimal Clips, Maximum Salience: Long Video Summarization via Key Moment Extraction. In Proceedings of the 16th International Workshop on Spoken Dialogue System Technology, pages 205–221, Trento, Italy. Association for Computational Linguistics.
Cite (Informal):
Minimal Clips, Maximum Salience: Long Video Summarization via Key Moment Extraction (Pennec et al., IWSDS 2026)
PDF:
https://aclanthology.org/2026.iwsds-1.22.pdf