Context-aware Visual Storytelling with Visual Prefix Tuning and Contrastive Learning

Yingjin Song, Denis Paperno, Albert Gatt


Abstract
Visual storytelling systems generate multi-sentence stories from image sequences. In this task, capturing contextual information and bridging visual variation across images bring additional challenges. We propose a simple yet effective framework that leverages the generalization capabilities of pretrained foundation models, training only a lightweight vision-language mapping network to connect modalities, while incorporating context to enhance coherence. We introduce a multimodal contrastive objective that also improves visual relevance and story informativeness. Extensive experimental results, across both automatic metrics and human evaluations, demonstrate that the stories generated by our framework are diverse, coherent, informative, and interesting.
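The sketch below illustrates, in a hedged and simplified form, the two components the abstract mentions: a lightweight mapping network that projects frozen image features into "visual prefix" embeddings for a language model, and a symmetric contrastive objective over matched image/story pairs. It is not the authors' implementation; all module names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed names and sizes) of a visual prefix mapper and a
# multimodal contrastive loss, in the spirit of the paper's description.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualPrefixMapper(nn.Module):
    """Maps a frozen image feature (e.g. from a CLIP encoder) to k prefix
    embeddings that are prepended to the language model's input."""

    def __init__(self, image_dim=512, lm_dim=768, prefix_len=10):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(image_dim, lm_dim * prefix_len),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len, lm_dim * prefix_len),
        )

    def forward(self, image_feats):           # (batch, image_dim)
        prefix = self.mlp(image_feats)        # (batch, prefix_len * lm_dim)
        return prefix.view(-1, self.prefix_len, self.lm_dim)


def multimodal_contrastive_loss(visual_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss: matched image/story pairs are pulled
    together, mismatched pairs within the batch are pushed apart."""
    visual_emb = F.normalize(visual_emb, dim=-1)   # (batch, d)
    text_emb = F.normalize(text_emb, dim=-1)       # (batch, d)
    logits = visual_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

In this reading, only the mapper (and any text projection head) is trained, while the image encoder and language model stay frozen; the contrastive term is added to the usual generation loss.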
Anthology ID: 2024.inlg-main.32
Volume: Proceedings of the 17th International Natural Language Generation Conference
Month: September
Year: 2024
Address: Tokyo, Japan
Editors: Saad Mahamood, Nguyen Le Minh, Daphne Ippolito
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 384–401
URL: https://aclanthology.org/2024.inlg-main.32
Cite (ACL): Yingjin Song, Denis Paperno, and Albert Gatt. 2024. Context-aware Visual Storytelling with Visual Prefix Tuning and Contrastive Learning. In Proceedings of the 17th International Natural Language Generation Conference, pages 384–401, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal): Context-aware Visual Storytelling with Visual Prefix Tuning and Contrastive Learning (Song et al., INLG 2024)
PDF: https://aclanthology.org/2024.inlg-main.32.pdf