VIMI: Grounding Video Generation through Multi-modal Instruction

Yuwei Fang, Willi Menapace, Aliaksandr Siarohin, Tsai-Shien Chen, Kuan-Chieh Wang, Ivan Skorokhodov, Graham Neubig, Sergey Tulyakov


Abstract
Existing text-to-video diffusion models rely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt-video datasets, resulting in a lack of visual grounding and restricting their versatility and application in multimodal integration. To address this, we construct a large-scale multimodal prompt dataset by employing retrieval methods to pair in-context examples with the given text prompts, and then utilize a two-stage training strategy to enable diverse video generation tasks within a single model. In the first stage, we propose a multimodal conditional video generation framework for pretraining on these augmented datasets, establishing a foundational model for grounded video generation. In the second stage, we fine-tune the model from the first stage on various video generation tasks, incorporating multimodal instructions. This process further refines the model’s ability to handle diverse inputs and tasks, ensuring seamless integration of multimodal information. After this two-stage training process, VIMI demonstrates multimodal understanding capabilities, producing contextually rich and personalized videos grounded in the provided inputs, as shown in Figure 1. Compared to previous subject-driven video generation methods, our generator can synthesize consistent and temporally coherent videos with large motion while retaining semantic control. Our generator also achieves state-of-the-art text-to-video generation results on the UCF101 benchmark.
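The dataset construction described above hinges on a retrieval step that pairs each text prompt with in-context examples drawn from a candidate pool. The sketch below is a minimal illustration of that pairing step, not the paper's pipeline: it assumes prompt and candidate embeddings were computed offline in a shared space (e.g., by a CLIP-style encoder), and the pool contents, embedding dimension, and value of k are placeholders for illustration.

```python
# Illustrative sketch (not the authors' released code): retrieval-based pairing of
# text prompts with in-context visual examples, as described in the abstract.
# Assumptions: embeddings live in a shared text-image space computed offline;
# the candidate pool and k are hypothetical.

import numpy as np


def retrieve_in_context_examples(prompt_embs: np.ndarray,
                                 pool_embs: np.ndarray,
                                 k: int = 3) -> np.ndarray:
    """Return indices of the top-k pool examples for each prompt (cosine similarity)."""
    # L2-normalize so the dot product equals cosine similarity.
    p = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    c = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ c.T                          # shape: (num_prompts, pool_size)
    # Sort similarities in descending order and keep the k best candidates per prompt.
    return np.argsort(-sims, axis=1)[:, :k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prompt_embs = rng.normal(size=(2, 512))    # stand-ins for encoded text prompts
    pool_embs = rng.normal(size=(100, 512))    # stand-ins for encoded candidate examples
    pairs = retrieve_in_context_examples(prompt_embs, pool_embs, k=3)
    # Each row lists the retrieved examples that would augment one text prompt
    # into a multimodal prompt for pretraining.
    print(pairs)
```

In the actual method, the retrieved examples augment each text prompt into a multimodal prompt used for first-stage pretraining; the sketch only shows the nearest-neighbor pairing logic.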
Anthology ID:
2024.emnlp-main.254
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4444–4456
URL:
https://aclanthology.org/2024.emnlp-main.254
DOI:
10.18653/v1/2024.emnlp-main.254
Cite (ACL):
Yuwei Fang, Willi Menapace, Aliaksandr Siarohin, Tsai-Shien Chen, Kuan-Chieh Wang, Ivan Skorokhodov, Graham Neubig, and Sergey Tulyakov. 2024. VIMI: Grounding Video Generation through Multi-modal Instruction. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4444–4456, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
VIMI: Grounding Video Generation through Multi-modal Instruction (Fang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.254.pdf