Sparsh Mittal


2024

GRIZAL: Generative Prior-guided Zero-Shot Temporal Action Localization
Onkar Susladkar | Gayatri Deshmukh | Vandan Gorade | Sparsh Mittal
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Zero-shot temporal action localization (TAL) aims to temporally localize actions in videos without prior training examples. To address the challenges of TAL, we offer GRIZAL, a model that uses multimodal embeddings and dynamic motion cues to localize actions effectively. GRIZAL achieves sample diversity by using large-scale generative models such as GPT-4 for textual augmentations and DALL-E for image augmentations. Our model integrates vision-language embeddings with optical flow insights, optimized through a blend of supervised and self-supervised loss functions. On the ActivityNet, THUMOS14, and Charades-STA datasets, GRIZAL greatly outperforms state-of-the-art zero-shot TAL models, demonstrating its robustness and adaptability across a wide range of video content. We will publicly release all models and code.