Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions

Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang


Abstract
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate whether a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct “mask-and-predict” pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge the two modalities. We find that such a simple approach achieves performance on four English V&L benchmarks close to that of a model pre-trained with aligned data. Our work challenges the widely held notion that aligned data is necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
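The “mask-and-predict” idea from the abstract can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' released code: mask_tokens, the masking rate, and the detector output shown are hypothetical stand-ins for the paper's BERT-style masking over subword tokens and detected region tags.

import random

MASK = "[MASK]"

def mask_tokens(tokens, p=0.15, rng=random):
    """BERT-style masking: hide ~p of the tokens and record reconstruction targets."""
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < p:
            masked.append(MASK)
            targets.append(tok)   # the model must predict the original token here
        else:
            masked.append(tok)
            targets.append(None)  # no loss on unmasked positions
    return masked, targets

# Text-only stream: mask-and-predict on caption-free text (e.g., BookCorpus).
text = "a brown dog runs across the grass".split()
print(mask_tokens(text))

# Image-only stream: an object detector supplies tags for each unpaired image
# (hypothetical output shown); the same masked-prediction objective applies.
detected_tags = ["dog", "grass", "frisbee"]
print(mask_tokens(detected_tags))

Because the same masking routine applies both to raw text and to the tag sequence a detector produces for an unpaired image, the tags act as anchor points: they give the two modalities a shared vocabulary without any image-caption alignment.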
Anthology ID:
2021.naacl-main.420
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5339–5350
URL:
https://aclanthology.org/2021.naacl-main.420
DOI:
10.18653/v1/2021.naacl-main.420
Cite (ACL):
Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, and Kai-Wei Chang. 2021. Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5339–5350, Online. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions (Li et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.420.pdf
Optional supplementary data:
2021.naacl-main.420.OptionalSupplementaryData.zip
Video:
https://aclanthology.org/2021.naacl-main.420.mp4
Data
BookCorpus, Conceptual Captions, RefCOCO, Visual Question Answering v2.0