MAST: Multimodal Abstractive Summarization with Trimodal Hierarchical Attention

Aman Khullar, Udit Arora


Abstract
This paper presents MAST, a new model for Multimodal Abstractive Text Summarization that utilizes information from all three modalities – text, audio and video – in a multimodal video. Prior work on multimodal abstractive text summarization only utilized information from the text and video modalities. We examine the usefulness and challenges of deriving information from the audio modality and present a sequence-to-sequence trimodal hierarchical attention-based model that overcomes these challenges by letting the model pay more attention to the text modality. MAST outperforms the current state-of-the-art (video-text) model by 2.51 Content F1 points and 1.00 ROUGE-L points on the How2 dataset for multimodal language understanding.
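The core idea in the abstract – attend within each modality first, then attend over the three modality contexts while favoring text – can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the dimensions, dot-product scoring, and the `text_bias` knob are illustrative assumptions standing in for the paper's learned trimodal hierarchical attention.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    """First level: dot-product attention over one modality's timesteps."""
    scores = keys @ query            # (T,) similarity of each timestep to the query
    weights = softmax(scores)        # (T,) attention distribution
    return weights @ keys            # (d,) modality context vector

def trimodal_hierarchical_attention(query, text, audio, video, text_bias=1.0):
    """Second level: attention over the three modality contexts.
    `text_bias` is a hypothetical additive score boost that mimics
    'letting the model pay more attention to the text modality'."""
    contexts = np.stack([attend(query, m) for m in (text, audio, video)])  # (3, d)
    modality_scores = contexts @ query        # (3,) one score per modality
    modality_scores[0] += text_bias           # bias attention toward text
    modality_weights = softmax(modality_scores)
    return modality_weights @ contexts        # (d,) fused trimodal context

# Toy example: a decoder query and three modality sequences of different lengths.
rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal(d)
text, audio, video = (rng.standard_normal((t, d)) for t in (5, 7, 6))
fused = trimodal_hierarchical_attention(query, text, audio, video)
print(fused.shape)  # (8,)
```

In the actual model the scores come from learned projections and the fused context feeds a sequence-to-sequence decoder at every step; the hierarchy shown here (per-modality contexts, then a modality-level distribution) is the structural idea the title refers to.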
Anthology ID:
2020.nlpbt-1.7
Volume:
Proceedings of the First International Workshop on Natural Language Processing Beyond Text
Month:
November
Year:
2020
Address:
Online
Editors:
Giuseppe Castellucci, Simone Filice, Soujanya Poria, Erik Cambria, Lucia Specia
Venue:
nlpbt
Publisher:
Association for Computational Linguistics
Pages:
60–69
URL:
https://aclanthology.org/2020.nlpbt-1.7
DOI:
10.18653/v1/2020.nlpbt-1.7
Cite (ACL):
Aman Khullar and Udit Arora. 2020. MAST: Multimodal Abstractive Summarization with Trimodal Hierarchical Attention. In Proceedings of the First International Workshop on Natural Language Processing Beyond Text, pages 60–69, Online. Association for Computational Linguistics.
Cite (Informal):
MAST: Multimodal Abstractive Summarization with Trimodal Hierarchical Attention (Khullar & Arora, nlpbt 2020)
PDF:
https://aclanthology.org/2020.nlpbt-1.7.pdf
Video:
https://slideslive.com/38939781
Code:
amankhullar/mast
Data:
How2