Fusion or Defusion? Flexible Vision-and-Language Pre-Training

Rongyi Sun, Ziran Li, Yifeng Ding, Qifan Wang, Jingang Wang, Haitao Zheng, Wei Wu, Yunsen Xian


Abstract
Existing approaches to vision-and-language pre-training (VLP) mainly deploy either fusion-based encoders or dual encoders, and thus fail to achieve both effectiveness and efficiency on downstream multimodal tasks. In this paper, we build a flexible VLP model by incorporating cross-modal fusion modules into a dual-encoder architecture; the introduced fusion modules can be easily decoupled from the dual encoder, switching the model to a fusion-free one. To better absorb cross-modal features from the fusion modules, we design a cross-modal knowledge transfer strategy, along with other comprehensive pre-training tasks, to guide the training process, which further strengthens both fusion-based and fusion-free representation learning. Extensive experiments on various downstream vision-language tasks show that our proposed model achieves both effectiveness and efficiency, outperforming other strong VLP models.
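
The core architectural idea in the abstract — fusion modules that can be attached to, or decoupled from, a dual encoder — can be illustrated with a minimal PyTorch sketch. All module names, layer counts, and dimensions below (`FusionBlock`, `FlexibleVLP`, the transformer stand-ins) are illustrative assumptions, not the authors' released implementation; the sketch only shows how a single switch can toggle between a fusion-based path and a fusion-free dual-encoder path.

```python
# Conceptual sketch of a dual encoder with detachable fusion modules.
# Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    """Cross-modal fusion: text tokens attend to image tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        fused, _ = self.cross_attn(query=text, key=image, value=image)
        return self.norm(text + fused)  # residual + layer norm


class FlexibleVLP(nn.Module):
    """Dual encoder whose fusion modules can be decoupled at inference.

    use_fusion=True  -> fusion-based mode (cross-modal interaction, slower).
    use_fusion=False -> fusion-free mode (plain dual encoder, efficient).
    """

    def __init__(self, dim: int = 256, num_fusion_layers: int = 2):
        super().__init__()
        # Stand-ins for real unimodal backbones (e.g. a ViT and a BERT).
        self.image_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2
        )
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2
        )
        self.fusion = nn.ModuleList(
            FusionBlock(dim) for _ in range(num_fusion_layers)
        )

    def forward(self, image_tokens, text_tokens, use_fusion: bool = True):
        img = self.image_encoder(image_tokens)
        txt = self.text_encoder(text_tokens)
        if use_fusion:
            # Fusion path: pass text features through the cross-modal blocks.
            for block in self.fusion:
                txt = block(txt, img)
        # Fusion-free path: the two encoders never interact.
        return img, txt


# Usage: toggle between the two modes on the same weights.
model = FlexibleVLP()
image_tokens = torch.randn(4, 50, 256)  # (batch, patches, dim)
text_tokens = torch.randn(4, 32, 256)   # (batch, tokens, dim)
img_emb, fused_txt = model(image_tokens, text_tokens, use_fusion=True)
img_emb, txt_emb = model(image_tokens, text_tokens, use_fusion=False)
```

A note on the design: with use_fusion=False the image and text embeddings are computed independently, so they can be precomputed offline and matched by similarity search, which is what makes the fusion-free mode efficient for retrieval-style tasks.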
Anthology ID: 2023.findings-acl.316
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5105–5119
URL: https://aclanthology.org/2023.findings-acl.316
DOI: 10.18653/v1/2023.findings-acl.316
Cite (ACL): Rongyi Sun, Ziran Li, Yifeng Ding, Qifan Wang, Jingang Wang, Haitao Zheng, Wei Wu, and Yunsen Xian. 2023. Fusion or Defusion? Flexible Vision-and-Language Pre-Training. In Findings of the Association for Computational Linguistics: ACL 2023, pages 5105–5119, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Fusion or Defusion? Flexible Vision-and-Language Pre-Training (Sun et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.316.pdf