Unsupervised Natural Language Inference via Decoupled Multimodal Contrastive Learning

Wanyun Cui, Guangyu Zheng, Wei Wang


Abstract
We propose to solve the natural language inference problem without any supervision from inference labels, via task-agnostic multimodal pretraining. Although recent studies of multimodal self-supervised learning also represent linguistic and visual context, their encoders for the different modalities are coupled, so they cannot incorporate visual information when encoding plain text alone. In this paper, we propose the Multimodal Aligned Contrastive Decoupled learning (MACD) network. MACD forces the decoupled text encoder to represent visual information via contrastive learning, so the encoder embeds visual knowledge even when applied to plain-text inference. We conducted comprehensive experiments on plain-text inference datasets (i.e., SNLI and STS-B). The unsupervised MACD even outperforms the fully supervised BiLSTM and BiLSTM+ELMo on STS-B.
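To make the abstract's core idea concrete, below is a minimal sketch of a decoupled contrastive objective of the kind it describes: two independent encoders (no cross-modal attention) trained with a symmetric InfoNCE-style loss over aligned (text, image) pairs, so the text encoder alone absorbs visual knowledge. All names and the temperature value are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb: torch.Tensor,
                     image_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of aligned (text, image) pairs.

    text_emb, image_emb: (batch, dim) embeddings produced by two
    *decoupled* encoders; row i in one modality matches row i in the other.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    # (batch, batch) similarity matrix; the diagonal holds matching pairs.
    logits = text_emb @ image_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the text-to-image and image-to-text directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

Because the encoders are decoupled, only the text encoder is needed at inference time, which is what allows plain-text tasks such as SNLI and STS-B to benefit from the visually grounded pretraining.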
Anthology ID:
2020.emnlp-main.444
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5511–5520
URL:
https://aclanthology.org/2020.emnlp-main.444
DOI:
10.18653/v1/2020.emnlp-main.444
Cite (ACL):
Wanyun Cui, Guangyu Zheng, and Wei Wang. 2020. Unsupervised Natural Language Inference via Decoupled Multimodal Contrastive Learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5511–5520, Online. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Natural Language Inference via Decoupled Multimodal Contrastive Learning (Cui et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.444.pdf
Video:
https://slideslive.com/38939293
Code:
GuangyuZheng/MACD
Data:
Flickr30k, GLUE, MS COCO