%0 Conference Proceedings
%T CLEVR-Dialog: A Diagnostic Dataset for Multi-Round Reasoning in Visual Dialog
%A Kottur, Satwik
%A Moura, José M. F.
%A Parikh, Devi
%A Batra, Dhruv
%A Rohrbach, Marcus
%Y Burstein, Jill
%Y Doran, Christy
%Y Solorio, Thamar
%S Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
%D 2019
%8 June
%I Association for Computational Linguistics
%C Minneapolis, Minnesota
%F kottur-etal-2019-clevr
%X Visual Dialog is a multimodal task of answering a sequence of questions grounded in an image (using the conversation history as context). It entails challenges in vision, language, reasoning, and grounding. However, studying these subtasks in isolation on large, real datasets is infeasible, as it requires prohibitively expensive complete annotation of the ‘state’ of all images and dialogs. We develop CLEVR-Dialog, a large diagnostic dataset for studying multi-round reasoning in visual dialog. Specifically, we construct a dialog grammar that is grounded in the scene graphs of the images from the CLEVR dataset. This combination results in a dataset where all aspects of the visual dialog are fully annotated. In total, CLEVR-Dialog contains 5 instances of 10-round dialogs for about 85k CLEVR images, totaling 4.25M question-answer pairs. We use CLEVR-Dialog to benchmark the performance of standard visual dialog models, in particular on visual coreference resolution (as a function of coreference distance). This is the first analysis of its kind for visual dialog models, and it was not possible without this dataset. We hope the findings from CLEVR-Dialog will help inform the development of future models for visual dialog. Our code and dataset are publicly available.
%R 10.18653/v1/N19-1058
%U https://aclanthology.org/N19-1058
%U https://doi.org/10.18653/v1/N19-1058
%P 582-595