SeqDialN: Sequential Visual Dialog Network in Joint Visual-Linguistic Representation Space

Liu Yang, Fanqi Meng, Xiao Liu, Ming-Kuang Daniel Wu, Vicent Ying, James Xu


Abstract
The key challenge of the visual dialog task is how to fuse features from multimodal sources and extract relevant information from the dialog history to answer the current query. In this work, we formulate a visual dialog as an information flow in which each piece of information is encoded with the joint visual-linguistic representation of a single dialog round. Based on this formulation, we consider the visual dialog task as a sequence problem over ordered visual-linguistic vectors. For featurization, we use a Dense Symmetric Co-Attention network (Nguyen and Okatani, 2018) as a lightweight vision-language joint representation generator to fuse multimodal features (i.e., image and text), yielding better computational and data efficiency. For inference, we propose two Sequential Dialog Networks (SeqDialN): the first uses an LSTM (Hochreiter and Schmidhuber, 1997) for information propagation (IP) and the second uses a modified Transformer (Vaswani et al., 2017) for multi-step reasoning (MR). Our architecture separates the complexity of multimodal feature fusion from that of inference, which allows a simpler design of the inference engine. On the VisDial v1.0 test-std dataset, our best single generative SeqDialN achieves 62.54% NDCG and 48.63% MRR, and our ensemble generative SeqDialN achieves 63.78% NDCG and 49.98% MRR, setting a new state of the art among generative visual dialog models. We fine-tune the discriminative SeqDialN with dense annotations and boost its performance to 72.41% NDCG and 55.11% MRR. We present extensive experiments demonstrating the effectiveness of our model components, visualize the reasoning process over relevant conversation rounds, and discuss our fine-tuning methods. The code is available at https://github.com/xiaoxiaoheimei/SeqDialN.
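To make the sequential formulation concrete, below is a minimal PyTorch sketch (not the authors' released code) of the core idea: each dialog round is first fused into a single joint visual-linguistic vector, and the ordered vectors form a sequence consumed by either an LSTM (the IP variant) or a Transformer encoder (the MR variant). The class name, dimensions, and the use of a stock Transformer encoder in place of the paper's modified Transformer are illustrative assumptions.

```python
# Minimal sketch of the SeqDialN sequence formulation, assuming each dialog
# round has already been fused into one joint visual-linguistic vector
# (e.g., by a co-attention module such as DCN). Names and sizes are
# hypothetical, not the published implementation.
import torch
import torch.nn as nn

class SeqDialNSketch(nn.Module):
    def __init__(self, d_model=512, mode="IP", n_heads=8, n_layers=2):
        super().__init__()
        self.mode = mode
        if mode == "IP":
            # Information propagation: an LSTM over the round vectors.
            self.seq = nn.LSTM(d_model, d_model, batch_first=True)
        else:
            # Multi-step reasoning: a (plain, unmodified) Transformer
            # encoder over the same sequence of round vectors.
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.seq = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, round_vecs):
        # round_vecs: (batch, num_rounds, d_model), one joint
        # visual-linguistic vector per dialog round, in dialog order.
        if self.mode == "IP":
            out, _ = self.seq(round_vecs)
        else:
            out = self.seq(round_vecs)
        # The representation of the current (last) round would feed a
        # downstream answer decoder (generative) or ranker (discriminative).
        return out[:, -1]

# Usage: a batch of 2 dialogs, 10 rounds each, 512-d fused vectors.
vecs = torch.randn(2, 10, 512)
print(SeqDialNSketch(mode="IP")(vecs).shape)  # torch.Size([2, 512])
print(SeqDialNSketch(mode="MR")(vecs).shape)  # torch.Size([2, 512])
```

The point of this separation is visible in the sketch: the fusion module only has to produce one vector per round, so the inference engine (LSTM or Transformer) stays a standard sequence model.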
Anthology ID:
2021.dialdoc-1.2
Volume:
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Song Feng, Siva Reddy, Malihe Alikhani, He He, Yangfeng Ji, Mohit Iyyer, Zhou Yu
Venue:
dialdoc
Publisher:
Association for Computational Linguistics
Pages:
8–17
URL:
https://aclanthology.org/2021.dialdoc-1.2
DOI:
10.18653/v1/2021.dialdoc-1.2
Cite (ACL):
Liu Yang, Fanqi Meng, Xiao Liu, Ming-Kuang Daniel Wu, Vicent Ying, and James Xu. 2021. SeqDialN: Sequential Visual Dialog Network in Joint Visual-Linguistic Representation Space. In Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021), pages 8–17, Online. Association for Computational Linguistics.
Cite (Informal):
SeqDialN: Sequential Visual Dialog Network in Joint Visual-Linguistic Representation Space (Yang et al., dialdoc 2021)
PDF:
https://aclanthology.org/2021.dialdoc-1.2.pdf
Code:
https://github.com/xiaoxiaoheimei/SeqDialN
Data:
VisDial