Wasifur Rahman
2020
Integrating Multimodal Information in Large Pretrained Transformers
Wasifur Rahman | Md Kamrul Hasan | Sangwu Lee | AmirAli Bagher Zadeh | Chengfeng Mao | Louis-Philippe Morency | Ehsan Hoque
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP. Fine-tuning the trained contextual models on task-specific datasets has been the key to achieving superior performance downstream. While fine-tuning these pre-trained models is straightforward for lexical applications (applications with only the language modality), it is not trivial for multimodal language (a growing area in NLP focused on modeling face-to-face communication). More specifically, this is because the pre-trained models lack the components needed to accept the two extra modalities of vision and acoustics. In this paper, we propose an attachment to BERT and XLNet called the Multimodal Adaptation Gate (MAG). MAG allows BERT and XLNet to accept multimodal nonverbal data during fine-tuning. It does so by generating a shift to the internal representations of BERT and XLNet, a shift conditioned on the visual and acoustic modalities. In our experiments, we study the commonly used CMU-MOSI and CMU-MOSEI datasets for multimodal sentiment analysis. Fine-tuning MAG-BERT and MAG-XLNet significantly boosts sentiment analysis performance over previous baselines as well as over language-only fine-tuning of BERT and XLNet. On the CMU-MOSI dataset, MAG-XLNet achieves human-level multimodal sentiment analysis performance for the first time in the NLP community.
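For illustration, here is a minimal PyTorch sketch of the gating-and-shift mechanism the abstract describes: nonverbal features gate a displacement vector that shifts the lexical embeddings, with the shift magnitude bounded relative to the norm of the lexical representation. The layer dimensions, the `beta` scaling hyperparameter, and all module names are assumptions for the example, not the authors' released implementation.

```python
# A minimal sketch of a Multimodal Adaptation Gate (MAG) layer, assuming
# word-aligned visual and acoustic features; not the paper's official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAG(nn.Module):
    def __init__(self, text_dim, visual_dim, acoustic_dim, beta=1.0, dropout=0.5):
        super().__init__()
        # Gates conditioned on the concatenation of the lexical embedding
        # and each nonverbal modality
        self.gate_v = nn.Linear(text_dim + visual_dim, text_dim)
        self.gate_a = nn.Linear(text_dim + acoustic_dim, text_dim)
        # Projections of each nonverbal modality into the text embedding space
        self.proj_v = nn.Linear(visual_dim, text_dim)
        self.proj_a = nn.Linear(acoustic_dim, text_dim)
        self.beta = beta  # assumed hyperparameter capping the shift magnitude
        self.norm = nn.LayerNorm(text_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, z, visual, acoustic):
        # z: (batch, seq_len, text_dim) word representations from a Transformer layer
        g_v = F.relu(self.gate_v(torch.cat([z, visual], dim=-1)))
        g_a = F.relu(self.gate_a(torch.cat([z, acoustic], dim=-1)))
        # Displacement vector built from the gated nonverbal projections
        h = g_v * self.proj_v(visual) + g_a * self.proj_a(acoustic)
        # Scale the shift so it cannot dominate the lexical representation
        eps = 1e-6
        alpha = self.beta * torch.clamp(
            z.norm(dim=-1, keepdim=True) / (h.norm(dim=-1, keepdim=True) + eps),
            max=1.0,
        )
        return self.dropout(self.norm(z + alpha * h))
```

In the paper's setup the adaptation is applied to word-level representations inside the pretrained network during fine-tuning; in this sketch it is a standalone layer that could wrap the output of any Transformer block.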
2019
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor
Md Kamrul Hasan | Wasifur Rahman | AmirAli Bagher Zadeh | Jianyuan Zhong | Md Iftekhar Tanveer | Louis-Philippe Morency | Mohammed (Ehsan) Hoque
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Humor is a unique and creative communicative behavior often displayed during social interactions. It is produced in a multimodal manner, through the use of words (text), gestures (visual), and prosodic cues (acoustic). Understanding humor from these three modalities falls within the boundaries of multimodal language, a recent research trend in natural language processing that models natural language as it happens in face-to-face communication. Although humor detection is an established research area in NLP, it has been understudied in a multimodal context. This paper presents a diverse multimodal dataset, called UR-FUNNY, to open the door to understanding the multimodal language used in expressing humor. The dataset and accompanying studies present a framework for multimodal humor detection for the natural language processing community. UR-FUNNY is publicly available for research.