Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog

Zhe Gan, Yu Cheng, Ahmed Kholy, Linjie Li, Jingjing Liu, Jianfeng Gao


Abstract
This paper presents a new model for visual dialog, Recurrent Dual Attention Network (ReDAN), using multi-step reasoning to answer a series of questions about an image. In each question-answering turn of a dialog, ReDAN infers the answer progressively through multiple reasoning steps. In each step of the reasoning process, the semantic representation of the question is updated based on the image and the previous dialog history, and the recurrently refined representation is used for further reasoning in the subsequent step. On the VisDial v1.0 dataset, the proposed ReDAN model achieves a new state-of-the-art NDCG score of 64.47%. Visualization of the reasoning process further demonstrates that ReDAN can locate context-relevant visual and textual clues via iterative refinement, leading to the correct answer step by step.
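The abstract describes the reasoning loop only in prose; the PyTorch sketch below illustrates one plausible reading of it. Every name here (DualAttentionStep, multi_step_reasoning), the additive-attention form, the GRUCell-based query refinement, and all dimensions are assumptions made for exposition, not the authors' implementation; see the PDF linked below for the actual ReDAN architecture.

# Minimal sketch of multi-step reasoning with dual attention, assuming
# additive attention and a GRUCell for recurrent query refinement.
# These choices are illustrative, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttentionStep(nn.Module):
    """One reasoning step: attend to image regions and to dialog-history
    rounds, then refine the question representation with both contexts."""

    def __init__(self, dim: int):
        super().__init__()
        self.img_scorer = nn.Linear(dim, 1)
        self.hist_scorer = nn.Linear(dim, 1)
        self.img_proj = nn.Linear(dim, dim)
        self.hist_proj = nn.Linear(dim, dim)
        self.query_proj = nn.Linear(dim, dim)
        # GRU cell carries the recurrently refined query across steps.
        self.refine = nn.GRUCell(2 * dim, dim)

    def attend(self, query, feats, scorer, proj):
        # Additive attention: score each feature against the current query,
        # then pool the features with the softmax-normalized weights.
        scores = scorer(torch.tanh(proj(feats) + self.query_proj(query).unsqueeze(1)))
        weights = F.softmax(scores, dim=1)   # (batch, n, 1)
        return (weights * feats).sum(dim=1)  # (batch, dim)

    def forward(self, query, img_feats, hist_feats):
        v = self.attend(query, img_feats, self.img_scorer, self.img_proj)
        h = self.attend(query, hist_feats, self.hist_scorer, self.hist_proj)
        # Fuse the visual and textual clues, then update the query state.
        return self.refine(torch.cat([v, h], dim=-1), query)


def multi_step_reasoning(query, img_feats, hist_feats, step, n_steps=3):
    """Iteratively refine the question representation over n_steps."""
    for _ in range(n_steps):
        query = step(query, img_feats, hist_feats)
    return query


if __name__ == "__main__":
    dim, batch = 128, 2
    step = DualAttentionStep(dim)
    q = torch.randn(batch, dim)        # encoded question
    img = torch.randn(batch, 36, dim)  # e.g. 36 detected region features
    hist = torch.randn(batch, 10, dim) # encoded dialog-history rounds
    refined = multi_step_reasoning(q, img, hist, step, n_steps=3)
    print(refined.shape)               # torch.Size([2, 128])

The design point the abstract emphasizes is the recurrence: the output of each dual-attention step becomes the query for the next, so attention over the image and the dialog history can sharpen across steps before the final refined representation is used for answer decoding.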
Anthology ID:
P19-1648
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6463–6474
URL:
https://aclanthology.org/P19-1648
DOI:
10.18653/v1/P19-1648
Cite (ACL):
Zhe Gan, Yu Cheng, Ahmed Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao. 2019. Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6463–6474, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog (Gan et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1648.pdf
Data:
GuessWhat?!, VisDial, Visual Question Answering