TVQA+: Spatio-Temporal Grounding for Video Question Answering

Jie Lei, Licheng Yu, Tamara Berg, Mohit Bansal


Abstract
We present the task of Spatio-Temporal Video Question Answering, which requires intelligent systems to simultaneously retrieve relevant moments and detect referenced visual concepts (people and objects) to answer natural language questions about videos. We first augment the TVQA dataset with 310.8K bounding boxes, linking depicted objects to visual concepts in questions and answers. We name this augmented version TVQA+. We then propose Spatio-Temporal Answerer with Grounded Evidence (STAGE), a unified framework that grounds evidence in both the spatial and temporal domains to answer questions about videos. Comprehensive experiments and analyses demonstrate the effectiveness of our framework and show how the rich annotations in our TVQA+ dataset can contribute to the question answering task. Moreover, by performing this joint task, our model is able to produce insightful and interpretable spatio-temporal attention visualizations.
Anthology ID:
2020.acl-main.730
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8211–8225
URL:
https://aclanthology.org/2020.acl-main.730
DOI:
10.18653/v1/2020.acl-main.730
Cite (ACL):
Jie Lei, Licheng Yu, Tamara Berg, and Mohit Bansal. 2020. TVQA+: Spatio-Temporal Grounding for Video Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8211–8225, Online. Association for Computational Linguistics.
Cite (Informal):
TVQA+: Spatio-Temporal Grounding for Video Question Answering (Lei et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.730.pdf
Video:
http://slideslive.com/38929082
Code:
jayleicn/TVQAplus
Data:
TVQA+, MovieFIB, MovieQA, TVQA, Visual Question Answering