Reasoning Over History: Context Aware Visual Dialog

Muhammad Shah, Shikib Mehri, Tejas Srinivasan


Abstract
While neural models have been shown to exhibit strong performance on single-turn visual question answering (VQA) tasks, extending VQA to a multi-turn, conversational setting remains a challenge. One way to address this challenge is to augment existing strong neural VQA models with mechanisms that allow them to retain information from previous dialog turns. One strong VQA model is the MAC network, which decomposes a task into a series of attention-based reasoning steps. However, since the MAC network is designed for single-turn question answering, it cannot refer to past dialog turns. More specifically, it struggles with tasks that require reasoning over the dialog history, particularly coreference resolution. We extend the MAC network architecture with Context-aware Attention and Memory (CAM), which attends over control states from past dialog turns to determine the reasoning operations necessary for the current question. MAC nets with CAM achieve up to 98.25% accuracy on the CLEVR-Dialog dataset, beating the existing state of the art by 30% (absolute). Our error analysis indicates that CAM particularly improves the model’s performance on questions that require coreference resolution.
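The core mechanism the abstract describes, attending over control states from past dialog turns to guide the current reasoning step, can be sketched as a simple dot-product attention. This is a minimal illustration with NumPy; the function name, scoring function, and shapes are assumptions for exposition, not the paper's actual CAM equations.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_over_history(query, past_controls):
    """Hypothetical sketch of attention over dialog history.

    query: (d,) representation of the current question.
    past_controls: (t, d) control states saved from t previous turns.
    Returns a (d,) context vector: a relevance-weighted mix of the
    past reasoning operations, which could then inform the current
    turn's control state (e.g. to resolve a coreferent like "it").
    """
    scores = past_controls @ query      # relevance of each past turn
    weights = softmax(scores)           # attention distribution over turns
    return weights @ past_controls      # weighted sum of past control states
```

With no trained parameters this is only a shape-level sketch, but it shows why such a mechanism can help with coreference: the context vector re-injects the reasoning operation from the turn the pronoun refers back to.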
Anthology ID:
2020.nlpbt-1.9
Volume:
Proceedings of the First International Workshop on Natural Language Processing Beyond Text
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | nlpbt
Publisher:
Association for Computational Linguistics
Pages:
75–83
URL:
https://aclanthology.org/2020.nlpbt-1.9
DOI:
10.18653/v1/2020.nlpbt-1.9
Cite (ACL):
Muhammad Shah, Shikib Mehri, and Tejas Srinivasan. 2020. Reasoning Over History: Context Aware Visual Dialog. In Proceedings of the First International Workshop on Natural Language Processing Beyond Text, pages 75–83, Online. Association for Computational Linguistics.
Cite (Informal):
Reasoning Over History: Context Aware Visual Dialog (Shah et al., nlpbt 2020)
PDF:
https://aclanthology.org/2020.nlpbt-1.9.pdf
Video:
https://slideslive.com/38939783
Data:
CLEVR | CLEVR-Dialog | VisDial | Visual Question Answering