Riko Suzuki
2021
Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference
Riko Suzuki | Hitomi Yanaka | Koji Mineshima | Daisuke Bekki
Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR)
This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, focusing on intentional and aspectual expressions that describe dynamic human actions. The dataset consists of 200 videos, 5,554 action labels, and 1,942 action triplets of the form (subject, predicate, object) that can be easily translated into logical semantic representations. The dataset is expected to be useful for evaluating multimodal inference systems that reason between videos and semantically complex sentences, including those involving negation and quantification.
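The abstract does not specify the exact logical format of the translated triplets; as an illustrative sketch only, a triplet such as (person, open, door) might be rendered as a neo-Davidsonian event formula, where the event variable e and the role predicates Subj and Obj are assumptions for this example:

\exists e\, \exists x\, \exists y\, \bigl( \mathrm{person}(x) \wedge \mathrm{door}(y) \wedge \mathrm{open}(e) \wedge \mathrm{Subj}(e, x) \wedge \mathrm{Obj}(e, y) \bigr)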
2019
Multimodal Logical Inference System for Visual-Textual Entailment
Riko Suzuki | Hitomi Yanaka | Masashi Yoshikawa | Koji Mineshima | Daisuke Bekki
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
A large body of research on multimodal inference across text and vision has recently emerged, aiming to obtain visually grounded word and sentence representations. In this paper, we use logic-based representations as unified meaning representations for texts and images and present an unsupervised multimodal logical inference system that can effectively prove entailment relations between them. We show that by combining semantic parsing and theorem proving, the system can handle semantically complex sentences for visual-textual inference.
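The paper's own pipeline (semantic parsing followed by theorem proving) is not reproduced here; as a minimal sketch of the general idea, assuming both the image content and the sentence have already been mapped to first-order formulas, an off-the-shelf prover such as NLTK's ResolutionProver could check the entailment. The formulas and predicate names below are hypothetical examples, not the paper's actual output.

# Minimal sketch: visual-textual entailment as first-order theorem proving.
# Assumes both modalities have already been parsed into logical formulas.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read_expr = Expression.fromstring

# Premise: a hypothetical logical representation extracted from an image/video
# ("a man is opening a door").
premise = read_expr(r'exists e.exists x.exists y.(man(x) & door(y) & open(e) & subj(e,x) & obj(e,y))')

# Hypothesis: a hypothetical representation of the sentence "someone opens something".
hypothesis = read_expr(r'exists e.exists x.exists y.(open(e) & subj(e,x) & obj(e,y))')

# The prover attempts to show that the premise entails the hypothesis.
prover = ResolutionProver()
print(prover.prove(hypothesis, [premise]))  # expected: True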