Learning the Visualness of Text Using Large Vision-Language Models

Gaurav Verma, Ryan Rossi, Christopher Tensmeyer, Jiuxiang Gu, Ani Nenkova


Abstract
Visual text evokes an image in a person’s mind, while non-visual text fails to do so. A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text with relevant images. This is particularly challenging with long-form text as text-to-image generation and retrieval models are often triggered for text that is designed to be explicitly visual in nature, whereas long-form text could contain many non-visual sentences. To this end, we curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators. We also propose a fine-tuning strategy that adapts large vision-language models like CLIP by modifying the model’s contrastive learning objective to map text identified as non-visual to a common NULL image while matching visual text to their corresponding images in the document. We evaluate the proposed approach on its ability to (i) classify visual and non-visual text accurately, and (ii) attend over words that are identified as visual in psycholinguistic studies. Empirical evaluation indicates that our approach performs better than several heuristics and baseline models for the proposed task. Furthermore, to highlight the importance of modeling the visualness of text, we conduct qualitative analyses of text-to-image generation systems like DALL-E.
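The abstract's key idea is a modified contrastive objective: sentences labeled non-visual are matched to a single shared NULL image embedding, while visual sentences are matched to their actual images. The following is a minimal sketch of what such a loss could look like; all names (`visualness_contrastive_loss`, `null_emb`, the symmetric InfoNCE form) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def visualness_contrastive_loss(text_emb, image_emb, is_visual, null_emb,
                                temperature=0.07):
    """Sketch of a CLIP-style contrastive loss in which non-visual
    sentences are pulled toward one shared NULL image embedding.

    text_emb:  (N, D) sentence embeddings from the text encoder
    image_emb: (N, D) embeddings of the images paired with each sentence
    is_visual: (N,) boolean mask; False -> target becomes the NULL image
    null_emb:  (D,) embedding standing in for "no image" (assumed learnable)
    """
    # Substitute the NULL embedding as the target for non-visual sentences.
    targets = torch.where(is_visual.unsqueeze(1), image_emb,
                          null_emb.unsqueeze(0).expand_as(image_emb))
    # Normalize, as in CLIP, so dot products are cosine similarities.
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(targets, dim=-1)
    logits = t @ v.T / temperature
    labels = torch.arange(len(t))
    # Symmetric InfoNCE over text->image and image->text directions.
    # (Simplification: with several non-visual sentences in a batch, the
    # NULL column repeats; the paper may handle such collisions differently.)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2
```

Under this formulation, text the model deems non-visual scores highest against the NULL embedding at inference time, which is how a visualness classifier could fall out of the fine-tuned model.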
Anthology ID: 2023.emnlp-main.147
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 2394–2408
URL: https://aclanthology.org/2023.emnlp-main.147
DOI: 10.18653/v1/2023.emnlp-main.147
Cite (ACL): Gaurav Verma, Ryan Rossi, Christopher Tensmeyer, Jiuxiang Gu, and Ani Nenkova. 2023. Learning the Visualness of Text Using Large Vision-Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2394–2408, Singapore. Association for Computational Linguistics.
Cite (Informal): Learning the Visualness of Text Using Large Vision-Language Models (Verma et al., EMNLP 2023)
PDF: https://aclanthology.org/2023.emnlp-main.147.pdf
Video: https://aclanthology.org/2023.emnlp-main.147.mp4