Weakly Supervised Vision-and-Language Pre-training with Relative Representations

Chi Chen, Peng Li, Maosong Sun, Yang Liu


Abstract
Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks. However, current WVLP methods use only local descriptions of images, i.e., object tags, as cross-modal anchors to construct weakly-aligned image-text pairs for pre-training. This affects the data quality and thus the effectiveness of pre-training. In this paper, we propose to directly take a small number of aligned image-text pairs as anchors, and represent each unaligned image and text by its similarities to these anchors, i.e., relative representations. We build a WVLP framework based on the relative representations, namely RELIT, which collects high-quality weakly-aligned image-text pairs from large-scale image-only and text-only data for pre-training through relative representation-based retrieval and generation. Experiments on four downstream tasks show that RELIT achieves new state-of-the-art results under the weakly supervised setting.
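The abstract's core idea is to describe each unimodal sample by its similarities to a small set of aligned anchor image-text pairs, so that images and texts become comparable in a shared "relative" space. The following is a minimal Python sketch of that idea, not the authors' implementation: all feature arrays, dimensions, and names (image_feats, anchor_text_feats, etc.) are hypothetical placeholders, and matching is done with plain cosine similarity.

import numpy as np

def cosine_similarity(a, b):
    # Pairwise cosine similarity between rows of a (n, d) and rows of b (m, d).
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def relative_representation(feats, anchor_feats):
    # Represent each sample by its similarities to the k anchors -> (n, k) matrix.
    return cosine_similarity(feats, anchor_feats)

# Toy illustration: anchors are a small set of aligned image-text pairs, so column j
# of the image-side and text-side relative representations refers to the same pair.
rng = np.random.default_rng(0)
image_feats = rng.normal(size=(5, 64))        # placeholder unimodal image features
text_feats = rng.normal(size=(100, 64))       # placeholder unimodal text features
anchor_image_feats = rng.normal(size=(8, 64))  # image side of the 8 anchor pairs
anchor_text_feats = rng.normal(size=(8, 64))   # text side of the same 8 anchor pairs

rel_images = relative_representation(image_feats, anchor_image_feats)  # (5, 8)
rel_texts = relative_representation(text_feats, anchor_text_feats)     # (100, 8)

# Retrieve a weakly-aligned caption for each image by matching relative representations.
best_text_per_image = cosine_similarity(rel_images, rel_texts).argmax(axis=1)
print(best_text_per_image)

In this sketch, retrieval reduces to nearest-neighbor search in the k-dimensional relative space; the paper additionally uses relative representation-based generation to build weakly-aligned pairs, which is not shown here.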
Anthology ID:
2023.acl-long.464
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8341–8355
URL:
https://aclanthology.org/2023.acl-long.464
DOI:
10.18653/v1/2023.acl-long.464
Bibkey:
Cite (ACL):
Chi Chen, Peng Li, Maosong Sun, and Yang Liu. 2023. Weakly Supervised Vision-and-Language Pre-training with Relative Representations. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8341–8355, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Weakly Supervised Vision-and-Language Pre-training with Relative Representations (Chen et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.464.pdf
Video:
https://aclanthology.org/2023.acl-long.464.mp4