Joint Multimedia Event Extraction from Video and Article
Brian Chen | Xudong Lin | Christopher Thomas | Manling Li | Shoya Yoshida | Lovish Chum | Heng Ji | Shih-Fu Chang
Findings of the Association for Computational Linguistics: EMNLP 2021
Visual and textual modalities contribute complementary information about events described in multimedia documents. Videos contain rich dynamics and detailed unfoldings of events, while text describes more high-level and abstract concepts. However, existing event extraction methods either do not handle video or target video alone while ignoring other modalities. In contrast, we propose the first approach to jointly extract events from both video and text articles. We introduce the new task of Video MultiMedia Event Extraction and propose two novel components to build the first system for this task. First, we propose the first self-supervised cross-modal event coreference model, which can determine coreference between video events and text events without any manually annotated pairs. Second, we introduce the first cross-modal transformer architecture, which extracts structured event information from both videos and text documents. We also construct and will publicly release a new benchmark consisting of 860 video-article pairs with extensive annotations for evaluating methods on this task. Our experimental results demonstrate the effectiveness of our proposed method on this benchmark: we achieve 6.0% and 5.8% absolute F-score gains on multimodal event coreference resolution and multimedia event extraction, respectively.