%0 Conference Proceedings
%T Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions
%A Li, Liunian Harold
%A You, Haoxuan
%A Wang, Zhecan
%A Zareian, Alireza
%A Chang, Shih-Fu
%A Chang, Kai-Wei
%Y Toutanova, Kristina
%Y Rumshisky, Anna
%Y Zettlemoyer, Luke
%Y Hakkani-Tur, Dilek
%Y Beltagy, Iz
%Y Bethard, Steven
%Y Cotterell, Ryan
%Y Chakraborty, Tanmoy
%Y Zhou, Yichao
%S Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2021
%8 June
%I Association for Computational Linguistics
%C Online
%F li-etal-2021-unsupervised
%X Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate if a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct “mask-and-predict” pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge two modalities. We find that such a simple approach achieves performance close to a model pre-trained with aligned data, on four English V&L benchmarks. Our work challenges the widely held notion that aligned data is necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
%R 10.18653/v1/2021.naacl-main.420
%U https://aclanthology.org/2021.naacl-main.420
%U https://doi.org/10.18653/v1/2021.naacl-main.420
%P 5339-5350