Visual Storytelling with Question-Answer Plans

Danyang Liu, Mirella Lapata, Frank Keller


Abstract
Visual storytelling aims to generate compelling narratives from image sequences. Existing models often focus on enhancing the representation of the image sequence, e.g., with external knowledge sources or advanced graph structures. Despite recent progress, the stories are often repetitive, illogical, and lacking in detail. To mitigate these issues, we present a novel framework which integrates visual representations with pretrained language models and planning. Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret. It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative. Automatic and human evaluation on the VIST benchmark demonstrates that blueprint-based models generate stories that are more coherent, interesting, and natural compared to competitive baselines and state-of-the-art systems.
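The abstract describes an architecture that maps the image sequence to a visual prefix of continuous embeddings and conditions a pretrained language model on a question-answer blueprint plan. The following is a minimal, hypothetical PyTorch sketch of that general idea; the module names, dimensions, and the toy decoder stand-in are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VisualPrefixStoryteller(nn.Module):
    """Hypothetical sketch: project image features into a 'visual prefix' of
    continuous embeddings, concatenate them with an embedded question-answer
    blueprint, and decode story tokens with a toy stand-in language model."""

    def __init__(self, img_feat_dim=512, lm_dim=768, prefix_len=10, vocab_size=50257):
        super().__init__()
        # Maps each image feature vector to `prefix_len` LM-sized embeddings.
        self.prefix_proj = nn.Linear(img_feat_dim, prefix_len * lm_dim)
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        # Shared token embeddings for the QA blueprint and the story.
        self.tok_emb = nn.Embedding(vocab_size, lm_dim)
        # Stand-in for a pretrained decoder-only LM (randomly initialized here).
        layer = nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(lm_dim, vocab_size)

    def forward(self, image_feats, blueprint_ids, story_ids):
        # image_feats: (batch, n_images, img_feat_dim)
        b, n, _ = image_feats.shape
        prefix = self.prefix_proj(image_feats).view(b, n * self.prefix_len, self.lm_dim)
        blueprint = self.tok_emb(blueprint_ids)   # embedded QA blueprint plan
        story = self.tok_emb(story_ids)           # story tokens (teacher forcing)
        inputs = torch.cat([prefix, blueprint, story], dim=1)
        # Causal mask so each position attends only to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(inputs.size(1)).to(inputs.device)
        hidden = self.decoder(inputs, mask=mask)
        # Predict story tokens from the positions that follow prefix + blueprint.
        story_hidden = hidden[:, -story_ids.size(1):]
        return self.lm_head(story_hidden)


# Toy usage: five images, a short tokenized blueprint, and a short story.
model = VisualPrefixStoryteller()
logits = model(
    torch.randn(1, 5, 512),            # assumed CLIP-style image features
    torch.randint(0, 50257, (1, 20)),  # tokenized question-answer blueprint
    torch.randint(0, 50257, (1, 30)),  # tokenized story (gold, for training)
)
print(logits.shape)  # (1, 30, 50257)
```

In the paper the decoder would be a pretrained language model and the image features would come from a pretrained vision encoder; the sketch only illustrates how a visual prefix and a blueprint plan can be concatenated into the decoder's input sequence.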
Anthology ID: 2023.findings-emnlp.386
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5800–5813
URL: https://aclanthology.org/2023.findings-emnlp.386
DOI: 10.18653/v1/2023.findings-emnlp.386
Cite (ACL): Danyang Liu, Mirella Lapata, and Frank Keller. 2023. Visual Storytelling with Question-Answer Plans. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5800–5813, Singapore. Association for Computational Linguistics.
Cite (Informal): Visual Storytelling with Question-Answer Plans (Liu et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.386.pdf