UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion

Wei Li, Xue Xu, Jiachen Liu, Xinyan Xiao


Abstract
Existing text-to-image diffusion models primarily generate images from text prompts. However, the inherent conciseness of textual descriptions makes it challenging to faithfully synthesize images with intricate details, such as specific entities or scenes. This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs and demonstrates a unified ability for both text-driven and subject-driven image generation. UNIMO-G comprises two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images from the encoded multimodal input. We leverage a two-stage training strategy to train the framework effectively: first pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation proficiency. A well-designed data processing pipeline involving language grounding and image segmentation is employed to construct multimodal prompts. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is notably effective in generating high-fidelity images from complex multimodal prompts involving multiple image entities.
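The abstract describes a two-component architecture: an MLLM encodes an interleaved text-and-image prompt into a conditioning sequence, and a conditional denoising diffusion network generates images from that sequence. Below is a minimal PyTorch sketch of that conditioning pattern only; the class names, projection layers, dimensions, and the cross-attention denoiser are illustrative assumptions, not the authors' implementation (the paper's actual MLLM and diffusion network are much larger and are not reproduced here).

```python
import torch
import torch.nn as nn


class MultimodalPromptEncoder(nn.Module):
    """Stand-in for the MLLM: projects pre-extracted text and image features
    into one interleaved conditioning sequence (hypothetical simplification)."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.text_proj = nn.Linear(dim, dim)
        self.image_proj = nn.Linear(dim, dim)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # Concatenate projected text and visual tokens into a single prompt sequence.
        return torch.cat([self.text_proj(text_feats), self.image_proj(image_feats)], dim=1)


class ConditionalDenoiser(nn.Module):
    """Toy denoising network: predicts noise from noisy latents, conditioned on
    the encoded multimodal prompt via cross-attention (not the paper's U-Net)."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_latents: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        attended, _ = self.cross_attn(noisy_latents, cond, cond)
        return self.out(attended)  # predicted noise, same shape as the latents


# Usage: one conditional denoising step on dummy tensors.
encoder, denoiser = MultimodalPromptEncoder(), ConditionalDenoiser()
text_feats = torch.randn(1, 16, 768)   # 16 text-token features
image_feats = torch.randn(1, 4, 768)   # 4 visual-token features from a reference image
latents = torch.randn(1, 64, 768)      # noisy image latents
noise_pred = denoiser(latents, encoder(text_feats, image_feats))
print(noise_pred.shape)                # torch.Size([1, 64, 768])
```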
Anthology ID:
2024.acl-long.335
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6173–6188
URL:
https://aclanthology.org/2024.acl-long.335
Cite (ACL):
Wei Li, Xue Xu, Jiachen Liu, and Xinyan Xiao. 2024. UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6173–6188, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion (Li et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.335.pdf