Identifying Visible Actions in Lifestyle Vlogs

Oana Ignat, Laura Burdick, Jia Deng, Rada Mihalcea


Abstract
We consider the task of identifying human actions visible in online videos. We focus on the widespread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify whether actions mentioned in the speech description of a video are visually present. We construct a dataset with crowdsourced manual annotations of visible actions, and introduce a multimodal algorithm that leverages information derived from visual and linguistic clues to automatically infer which actions are visible in a video.
Anthology ID: P19-1643
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 6406–6417
URL: https://aclanthology.org/P19-1643
DOI: 10.18653/v1/P19-1643
Cite (ACL): Oana Ignat, Laura Burdick, Jia Deng, and Rada Mihalcea. 2019. Identifying Visible Actions in Lifestyle Vlogs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6406–6417, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Identifying Visible Actions in Lifestyle Vlogs (Ignat et al., ACL 2019)
PDF: https://aclanthology.org/P19-1643.pdf
Code: MichiganNLP/vlog_action_recognition
Data: Vlogs