Integrating Visuospatial, Linguistic, and Commonsense Structure into Story Visualization

Adyasha Maharana, Mohit Bansal


Abstract
While much research has been done on text-to-image synthesis, little work has explored the use of the linguistic structure of the input text. Such information is even more important for story visualization, since its inputs have an explicit narrative structure that needs to be translated into an image sequence (or visual story). Prior work in this domain has shown that there is ample room for improvement in the generated image sequences in terms of visual quality, consistency, and relevance. In this paper, we first explore the use of constituency parse trees, using a Transformer-based recurrent architecture to encode the structured input. Second, we augment the structured input with commonsense information and study the impact of this external knowledge on the generation of visual stories. Third, we incorporate visual structure via bounding boxes and dense captioning, which provide feedback about the characters and objects in generated images within a dual learning setup. We show that off-the-shelf dense-captioning models trained on Visual Genome can improve the spatial structure of images from a different target domain without fine-tuning. We train the model end-to-end with an intra-story contrastive loss (between words and image sub-regions) and show significant improvements in visual quality. Finally, we provide an analysis of the linguistic and visuospatial information.
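The intra-story contrastive loss mentioned above aligns word embeddings with image sub-region embeddings. A minimal sketch of one plausible symmetric (InfoNCE-style) formulation follows; the function names, the temperature value, and the exact normalization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_loss(word_emb, region_emb, temperature=0.1):
    """Symmetric contrastive loss between words and image sub-regions.

    word_emb:   (n, d) word embeddings for one story
    region_emb: (n, d) embeddings of the matched image sub-regions
    Matched pairs share an index; all other pairs act as negatives.
    """
    # Cosine-similarity matrix between every word and every region.
    w = word_emb / np.linalg.norm(word_emb, axis=1, keepdims=True)
    r = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    sim = (w @ r.T) / temperature            # shape (n, n)
    labels = np.arange(len(sim))
    # Cross-entropy in both directions: word->region and region->word.
    loss_wr = -np.log(softmax(sim, axis=1)[labels, labels]).mean()
    loss_rw = -np.log(softmax(sim, axis=0)[labels, labels]).mean()
    return (loss_wr + loss_rw) / 2
```

Minimizing this loss pulls each word toward its matched sub-region while pushing it away from the other sub-regions in the same story, which is what makes the loss "intra-story": negatives come from within the story rather than from the whole batch.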
Anthology ID: 2021.emnlp-main.543
Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2021
Address: Online and Punta Cana, Dominican Republic
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 6772–6786
URL: https://aclanthology.org/2021.emnlp-main.543
DOI: 10.18653/v1/2021.emnlp-main.543
Cite (ACL):
Adyasha Maharana and Mohit Bansal. 2021. Integrating Visuospatial, Linguistic, and Commonsense Structure into Story Visualization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6772–6786, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Integrating Visuospatial, Linguistic, and Commonsense Structure into Story Visualization (Maharana & Bansal, EMNLP 2021)
PDF: https://aclanthology.org/2021.emnlp-main.543.pdf
Video: https://aclanthology.org/2021.emnlp-main.543.mp4
Code: adymaharana/vlcstorygan
Data: ConceptNet, Visual Genome