Cross-Modality Relevance for Reasoning on Language and Vision

Chen Zheng, Quan Guo, Parisa Kordjamshidi


Abstract
This work deals with the challenge of learning and reasoning over language and vision data for related downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR). We design a novel cross-modality relevance module that is used in an end-to-end framework to learn the relevance representation between components of various input modalities under the supervision of a target task, which is more generalizable to unobserved data than merely reshaping the original representation space. In addition to modeling the relevance between textual entities and visual entities, we model the higher-order relevance between entity relations in the text and object relations in the image. Our proposed approach shows competitive performance on two different language and vision tasks using public benchmarks and improves the state-of-the-art published results. The alignments of input spaces and their relevance representations learned on the NLVR task boost the training efficiency of the VQA task.
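As a rough, illustrative sketch of the idea of first-order cross-modality relevance (this is not the authors' implementation; the function name, the scaled dot-product scoring, and the softmax normalization are all assumptions), one can compute a relevance matrix between textual entity features and visual object features like this:

```python
import numpy as np

def relevance_matrix(text_feats, vis_feats):
    """Pairwise relevance between m textual entities (m x d) and
    n visual entities (n x d), scored by scaled dot product."""
    d = text_feats.shape[1]
    scores = text_feats @ vis_feats.T / np.sqrt(d)  # (m, n)
    # Softmax over the visual axis so each textual entity
    # distributes its relevance mass over the visual entities.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy example: 3 textual entities, 4 visual objects, 8-dim features.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 8))
V = rng.standard_normal((4, 8))
R = relevance_matrix(T, V)  # shape (3, 4); each row sums to 1
```

In the paper, such relevance representations are learned end-to-end under task supervision, and an analogous second-order relevance is computed between pairs of textual relations and pairs of object relations; the released code linked below contains the actual model.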
Anthology ID:
2020.acl-main.683
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Note:
Pages:
7642–7651
URL:
https://aclanthology.org/2020.acl-main.683
DOI:
10.18653/v1/2020.acl-main.683
Cite (ACL):
Chen Zheng, Quan Guo, and Parisa Kordjamshidi. 2020. Cross-Modality Relevance for Reasoning on Language and Vision. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7642–7651, Online. Association for Computational Linguistics.
Cite (Informal):
Cross-Modality Relevance for Reasoning on Language and Vision (Zheng et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.683.pdf
Video:
http://slideslive.com/38929203
Code:
HLR/Cross_Modality_Relevance
Data:
NLVR
Visual Question Answering