LifeQA: A Real-life Dataset for Video Question Answering
Santiago Castro | Mahmoud Azab | Jonathan Stroud | Cristina Noujaim | Ruoyao Wang | Jia Deng | Rada Mihalcea
Proceedings of the Twelfth Language Resources and Evaluation Conference (LREC 2020)
We introduce LifeQA, a benchmark dataset for video question answering that focuses on day-to-day real-life situations. Current video question answering datasets consist of movies and TV shows. However, it is well-known that these visual domains are not representative of our day-to-day lives. Movies and TV shows, for example, benefit from professional camera movements, clean editing, crisp audio recordings, and scripted dialog between professional actors. While these domains provide a large amount of data for training models, their properties make them unsuitable for testing real-life question answering systems. Our dataset, by contrast, consists of video clips that represent only real-life scenarios. We collect 275 such video clips and over 2.3k multiple-choice questions. In this paper, we analyze the challenging but realistic aspects of LifeQA, and we apply several state-of-the-art video question answering models to provide benchmarks for future research. The full dataset is publicly available at https://lit.eecs.umich.edu/lifeqa/.