Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

Riko Suzuki, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki


Abstract
This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions. The dataset consists of 200 videos, 5,554 action labels, and 1,942 action triplets of the form (subject, predicate, object) that can be easily translated into logical semantic representations. The dataset is expected to be useful for evaluating multimodal inference systems that relate videos to semantically complex sentences, including those involving negation and quantification.
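The triplet format described above is simple enough that logical semantic representations can be produced mechanically. The sketch below is not taken from the paper's released code; it only illustrates, under an assumed neo-Davidsonian event encoding, how a hypothetical triplet such as (person, hold, violin) could be rendered as an existentially quantified formula.

```python
# Minimal sketch (assumption, not the authors' implementation): map an action
# triplet (subject, predicate, object) to a first-order-logic-style formula
# with an event variable. The example triplet below is hypothetical.

from typing import NamedTuple


class ActionTriplet(NamedTuple):
    subject: str
    predicate: str
    object: str


def to_logical_form(t: ActionTriplet) -> str:
    """Render a triplet as an existentially quantified event formula."""
    return (
        f"exists e x y. ({t.predicate}(e) "
        f"& subj(e, x) & {t.subject}(x) "
        f"& obj(e, y) & {t.object}(y))"
    )


if __name__ == "__main__":
    triplet = ActionTriplet("person", "hold", "violin")  # hypothetical example
    print(to_logical_form(triplet))
    # exists e x y. (hold(e) & subj(e, x) & person(x) & obj(e, y) & violin(y))
```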
Anthology ID:
2021.mmsr-1.10
Volume:
Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR)
Month:
June
Year:
2021
Address:
Groningen, Netherlands (Online)
Editors:
Lucia Donatelli, Nikhil Krishnaswamy, Kenneth Lai, James Pustejovsky
Venue:
MMSR
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
102–107
URL:
https://aclanthology.org/2021.mmsr-1.10
Cite (ACL):
Riko Suzuki, Hitomi Yanaka, Koji Mineshima, and Daisuke Bekki. 2021. Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference. In Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR), pages 102–107, Groningen, Netherlands (Online). Association for Computational Linguistics.
Cite (Informal):
Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference (Suzuki et al., MMSR 2021)
PDF:
https://aclanthology.org/2021.mmsr-1.10.pdf
Code:
rikos3/HumanActions
Data:
CharadesViolin