Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances

Yike Wu, Yu Zhao, Shiwan Zhao, Ying Zhang, Xiaojie Yuan, Guoqing Zhao, Ning Jiang


Abstract
Despite the great progress of Visual Question Answering (VQA), current VQA models heavily rely on superficial correlations between the question type and its corresponding frequent answers (i.e., language priors) to make predictions, without really understanding the input. In this work, we define training instances with the same question type but different answers as superficially similar instances, and attribute the language priors to the VQA model's confusion on such instances. To solve this problem, we propose a novel training framework that explicitly encourages the VQA model to distinguish between superficially similar instances. Specifically, for each training instance, we first construct a set that contains its superficially similar counterparts. Then we exploit the proposed distinguishing module to increase the distance between the instance and its counterparts in the answer space. In this way, the VQA model is forced to attend to the parts of the input beyond the question type, which helps it overcome the language priors. Experimental results show that our method achieves state-of-the-art performance on VQA-CP v2. Code is available at Distinguishing-VQA.
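The distinguishing objective described in the abstract can be sketched as a margin-based distance loss in the answer space: for each instance, counterparts with the same question type but different answers are pushed away from the anchor's answer distribution. This is a minimal illustrative sketch, not the paper's exact formulation; the function name, the use of Euclidean distance, and the margin value are assumptions.

```python
import numpy as np

def distinguishing_loss(anchor, counterparts, margin=0.5):
    """Hypothetical distinguishing loss: penalize counterparts whose
    answer-space distributions lie within `margin` of the anchor's,
    encouraging the model to separate superficially similar instances."""
    losses = []
    for c in counterparts:
        # Euclidean distance between answer distributions (an assumption;
        # any answer-space metric could play this role)
        dist = np.linalg.norm(anchor - c)
        # Hinge: only counterparts closer than the margin incur a penalty
        losses.append(max(0.0, margin - dist))
    return float(np.mean(losses))

# Toy example over a 3-answer space: one counterpart is already far
# from the anchor, the other is too close and contributes a penalty.
anchor = np.array([0.90, 0.05, 0.05])
cp_far = np.array([0.10, 0.80, 0.10])
cp_near = np.array([0.85, 0.10, 0.05])
loss = distinguishing_loss(anchor, [cp_far, cp_near], margin=0.5)
```

In a real training loop, this term would be added to the standard VQA classification loss so that separating superficially similar instances complements, rather than replaces, answer supervision.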
Anthology ID:
2022.coling-1.503
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5721–5729
URL:
https://aclanthology.org/2022.coling-1.503
Cite (ACL):
Yike Wu, Yu Zhao, Shiwan Zhao, Ying Zhang, Xiaojie Yuan, Guoqing Zhao, and Ning Jiang. 2022. Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5721–5729, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances (Wu et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.503.pdf
Code:
wyk-nku/distinguishing-vqa
Data:
Visual Question Answering v2.0