DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training

Luyang Huang, Guocheng Niu, Jiachen Liu, Xinyan Xiao, Hua Wu


Abstract
Due to limitations in model structure and pre-training objectives, existing vision-and-language generation models cannot utilize paired images and text through bi-directional generation. In this paper, we propose DU-VLG, a framework that unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss. DU-VLG also obtains higher scores than previous state-of-the-art systems on three vision-and-language generation tasks. In addition, human judges confirm that our model generates real and relevant images as well as faithful and informative captions.
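The commitment loss mentioned in the abstract is not detailed on this page. As a rough illustration only, the sketch below shows a VQ-VAE-style commitment term in PyTorch, which pulls continuous encoder features toward the codebook embeddings of the target discrete visual tokens. All names here (commitment_loss, codebook, target_token_ids, beta) are hypothetical, and the paper's exact formulation may differ.

# Illustrative sketch, not the paper's implementation: a VQ-VAE-style
# commitment loss, assuming the image tokenizer exposes a codebook of
# discrete visual-token embeddings.
import torch
import torch.nn.functional as F

def commitment_loss(encoder_features: torch.Tensor,
                    codebook: torch.Tensor,
                    target_token_ids: torch.Tensor,
                    beta: float = 0.25) -> torch.Tensor:
    """Pull continuous image features toward their quantized codebook entries.

    encoder_features: (batch, num_patches, dim) continuous features from the encoder.
    codebook:         (vocab_size, dim) embeddings of the discrete visual tokens.
    target_token_ids: (batch, num_patches) ground-truth visual token ids.
    """
    # Look up the codebook vectors for the target visual tokens.
    quantized = codebook[target_token_ids]  # (batch, num_patches, dim)
    # Commit the encoder output to the quantized representation; stop gradients
    # through the codebook so only the encoder is updated by this term.
    return beta * F.mse_loss(encoder_features, quantized.detach())

In a pre-training setup like the one described, such a term would typically be added to the generation (cross-entropy) objectives rather than used on its own.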
Anthology ID:
2022.findings-acl.201
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2552–2566
URL:
https://aclanthology.org/2022.findings-acl.201
DOI:
10.18653/v1/2022.findings-acl.201
Cite (ACL):
Luyang Huang, Guocheng Niu, Jiachen Liu, Xinyan Xiao, and Hua Wu. 2022. DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2552–2566, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training (Huang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.201.pdf
Data
MS COCO