2024
Kandinsky 3: Text-to-Image Synthesis for Multifunctional Generative Framework
Arkhipkin Vladimir | Viacheslav Vasilev | Andrei Filatov | Igor Pavlov | Julia Agafonova | Nikolai Gerasimenko | Anna Averchenkova | Evelina Mironova | Bukashkin Anton | Konstantin Kulikov | Andrey Kuznetsov | Denis Dimitrov
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Text-to-image (T2I) diffusion models are popular for introducing image manipulation methods, such as editing, image fusion, inpainting, etc. At the same time, image-to-video (I2V) and text-to-video (T2V) models are also built on top of T2I models. We present Kandinsky 3, a novel T2I model based on latent diffusion, achieving a high level of quality and photorealism. The key feature of the new architecture is the simplicity and efficiency of its adaptation for many types of generation tasks. We extend the base T2I model for various applications and create a multifunctional generation system that includes text-guided inpainting/outpainting, image fusion, text-image fusion, image variations generation, I2V and T2V generation. We also present a distilled version of the T2I model that performs inference in 4 steps of the reverse process, running 3 times faster than the base model without reducing image quality. We deployed a publicly available, user-friendly demo system in which all of these features can be tested. Additionally, we released the source code and checkpoints for Kandinsky 3 and the extended models. Human evaluations show that Kandinsky 3 demonstrates one of the highest quality scores among open-source generation systems.
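Since the abstract notes that the source code and checkpoints are released, a minimal text-to-image sketch is shown below. It assumes the Hugging Face diffusers integration and the "kandinsky-community/kandinsky-3" checkpoint name; both the repository id and the step counts are assumptions to be checked against the official release, not details taken from the paper itself.

```python
# Minimal sketch: text-to-image inference with a released Kandinsky 3 checkpoint.
# The checkpoint id "kandinsky-community/kandinsky-3" is an assumption; verify it
# against the official repository before use.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3",
    torch_dtype=torch.float16,
).to("cuda")

# The base model runs a full reverse diffusion process (illustrative 25 steps here);
# the distilled version described in the abstract is reported to need only 4 steps.
image = pipe(
    "A photo of a red fox in a snowy forest",
    num_inference_steps=25,
).images[0]
image.save("fox.png")
```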
2023
Kandinsky: An Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion
Anton Razzhigaev | Arseniy Shakhmatov | Anastasia Maltseva | Vladimir Arkhipkin | Igor Pavlov | Ilya Ryabov | Angelina Kuts | Alexander Panchenko | Andrey Kuznetsov | Denis Dimitrov
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Text-to-image generation is a significant domain in modern computer vision and has achieved substantial improvements through the evolution of generative architectures. Among these, diffusion-based models have demonstrated essential quality enhancements. These models generally fall into two categories: pixel-level and latent-level approaches. We present Kandinsky – a novel exploration of latent diffusion architecture, combining the principles of image prior models with latent diffusion techniques. The image prior model is trained separately to map CLIP text embeddings to CLIP image embeddings. Another distinct feature of the proposed model is the modified MoVQ implementation, which serves as the image autoencoder component. Overall, the designed model contains 3.3B parameters. We also deployed a user-friendly demo system that supports diverse generative modes such as text-to-image generation, image fusion, text and image fusion, image variations generation, and text-guided inpainting/outpainting. Additionally, we released the source code and checkpoints for the Kandinsky models. Experimental evaluations demonstrate an FID score of 8.03 on the COCO-30K dataset, marking our model as the top open-source performer in terms of measurable image generation quality.
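Since the released checkpoints expose the two-stage design described above (an image prior followed by a latent diffusion decoder with the MoVQ autoencoder), a minimal usage sketch is shown below. It assumes the public diffusers integration; the pipeline classes and the checkpoint ids ("kandinsky-community/kandinsky-2-1-prior", "kandinsky-community/kandinsky-2-1") are assumptions drawn from that integration rather than from the paper.

```python
# Minimal sketch of the two-stage pipeline: a diffusion prior maps CLIP text
# embeddings to CLIP image embeddings, and a latent diffusion decoder (with the
# MoVQ autoencoder) turns those embeddings into an image. Checkpoint names are
# assumptions based on the public diffusers integration.
import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline

prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
decoder = KandinskyPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "An astronaut riding a horse, photorealistic"

# Stage 1: the image prior produces CLIP image embeddings from the text prompt.
image_embeds, negative_image_embeds = prior(prompt).to_tuple()

# Stage 2: latent diffusion conditioned on those embeddings, decoded by MoVQ.
image = decoder(
    prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
image.save("astronaut.png")
```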