%0 Conference Proceedings
%T Multi-modal Summarization for Asynchronous Collection of Text, Image, Audio and Video
%A Li, Haoran
%A Zhu, Junnan
%A Ma, Cong
%A Zhang, Jiajun
%A Zong, Chengqing
%Y Palmer, Martha
%Y Hwa, Rebecca
%Y Riedel, Sebastian
%S Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
%D 2017
%8 September
%I Association for Computational Linguistics
%C Copenhagen, Denmark
%F li-etal-2017-multi
%X The rapid increase of the multimedia data over the Internet necessitates multi-modal summarization from collections of text, image, audio and video. In this work, we propose an extractive Multi-modal Summarization (MMS) method which can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal contents. For audio information, we design an approach to selectively use its transcription. For vision information, we learn joint representations of texts and images using a neural network. Finally, all the multi-modal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese. The experimental results on this dataset demonstrate that our method outperforms other competitive baseline methods.
%R 10.18653/v1/D17-1114
%U https://aclanthology.org/D17-1114
%U https://doi.org/10.18653/v1/D17-1114
%P 1092-1102