%0 Conference Proceedings
%T Generating Question Relevant Captions to Aid Visual Question Answering
%A Wu, Jialin
%A Hu, Zeyuan
%A Mooney, Raymond
%Y Korhonen, Anna
%Y Traum, David
%Y Màrquez, Lluís
%S Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
%D 2019
%8 July
%I Association for Computational Linguistics
%C Florence, Italy
%F wu-etal-2019-generating
%X Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach to better VQA performance that exploits this connection by jointly generating captions that are targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions using an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g., 68.4% on the Test-standard set using a single model) by simultaneously generating question-relevant captions.
%R 10.18653/v1/P19-1348
%U https://aclanthology.org/P19-1348
%U https://doi.org/10.18653/v1/P19-1348
%P 3585-3594