Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering
Zujie Liang | Weitao Jiang | Haifeng Hu | Jiaying Zhu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
In the task of Visual Question Answering (VQA), most state-of-the-art models tend to learn spurious correlations in the training set and perform poorly on out-of-distribution test data. Several methods for generating counterfactual samples have been proposed to alleviate this problem. However, the counterfactual samples generated by most previous methods are simply added to the training data for augmentation and are not fully utilized. Therefore, we introduce a novel self-supervised contrastive learning mechanism to learn the relationships among original samples, factual samples, and counterfactual samples. With the better cross-modal joint embeddings learned from this auxiliary training objective, the reasoning capability and robustness of the VQA model are boosted significantly. We demonstrate the effectiveness of our method by surpassing current state-of-the-art models on VQA-CP, a diagnostic benchmark for assessing the robustness of VQA models.
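To illustrate the kind of objective the abstract describes, here is a minimal sketch (not the authors' released code) of an InfoNCE-style contrastive loss that pulls the original sample's joint embedding toward the factual sample's embedding and pushes it away from the counterfactual one. The function name, the argument names `anchor`/`factual`/`counterfactual`, and the temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, factual, counterfactual, temperature=0.5):
    """Hypothetical auxiliary loss over cross-modal joint embeddings.

    anchor, factual, counterfactual: (batch, dim) joint image-question
    embeddings for the original, factual, and counterfactual samples.
    """
    # Similarity to the positive (factual) pair, scaled by temperature.
    pos = torch.exp(F.cosine_similarity(anchor, factual, dim=-1) / temperature)
    # Similarity to the negative (counterfactual) pair.
    neg = torch.exp(F.cosine_similarity(anchor, counterfactual, dim=-1) / temperature)
    # Maximize the probability of the factual pair relative to both pairs.
    return -torch.log(pos / (pos + neg)).mean()
```

In this formulation the loss would be minimized jointly with the standard VQA answer-classification loss, so the encoder is encouraged to produce embeddings that distinguish factual from counterfactual inputs.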