ViPE: Visualise Pretty-much Everything

Hassan Shahmohammadi, Adhiraj Ghosh, Hendrik Lensch


Abstract
Figurative and non-literal expressions are profoundly integrated into human communication. Visualising such expressions allows us to convey creative thoughts and evoke nuanced emotions. Recent text-to-image models like Stable Diffusion, on the other hand, struggle to depict non-literal expressions. Recent works primarily tackle this problem by compiling small-scale, human-annotated datasets, which not only demands specialized expertise but also proves highly inefficient. To address this issue, we introduce ViPE: Visualise Pretty-much Everything. ViPE offers a series of lightweight and robust language models trained on a large-scale set of lyrics paired with noisy visual descriptions that represent their implicit meaning. The synthetic visual descriptions are generated by GPT-3.5, relying on neither human annotations nor images. ViPE effectively translates any arbitrary piece of text into a visualisable description, enabling meaningful and high-quality image generation. We provide compelling evidence that ViPE is more robust than GPT-3.5 in synthesising visual elaborations. ViPE also exhibits an understanding of figurative expressions comparable to that of human experts, providing a powerful, open-source backbone for many downstream applications such as music video and caption generation.
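
As a rough illustration of the pipeline the abstract describes, the sketch below chains a ViPE-style language model with Stable Diffusion: the language model rewrites a figurative phrase into a literal, visualisable prompt, which is then rendered by the text-to-image model. The checkpoint identifiers, prompt format, and generation settings are placeholders for illustration, not the authors' released configuration.

# Hypothetical sketch: ViPE-style prompt elaboration followed by image synthesis.
# Checkpoint names below are placeholders, not the official release IDs.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from diffusers import StableDiffusionPipeline

VIPE_CHECKPOINT = "path/to/vipe-checkpoint"      # placeholder for a ViPE model
SD_CHECKPOINT = "CompVis/stable-diffusion-v1-4"  # any public Stable Diffusion checkpoint

# Stage 1: rewrite a non-literal expression into a concrete visual description.
tokenizer = AutoTokenizer.from_pretrained(VIPE_CHECKPOINT)
vipe = AutoModelForCausalLM.from_pretrained(VIPE_CHECKPOINT)

phrase = "my heart is a rolling stone"
inputs = tokenizer(phrase, return_tensors="pt")
output_ids = vipe.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
# Keep only the newly generated continuation as the visual elaboration.
visual_prompt = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Stage 2: render the elaborated prompt with a text-to-image model.
pipe = StableDiffusionPipeline.from_pretrained(SD_CHECKPOINT, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe(visual_prompt).images[0]
image.save("vipe_visualisation.png")
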
Anthology ID:
2023.emnlp-main.333
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5477–5494
URL:
https://aclanthology.org/2023.emnlp-main.333
DOI:
10.18653/v1/2023.emnlp-main.333
Cite (ACL):
Hassan Shahmohammadi, Adhiraj Ghosh, and Hendrik Lensch. 2023. ViPE: Visualise Pretty-much Everything. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5477–5494, Singapore. Association for Computational Linguistics.
Cite (Informal):
ViPE: Visualise Pretty-much Everything (Shahmohammadi et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.333.pdf
Video:
https://aclanthology.org/2023.emnlp-main.333.mp4