Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition

Jean-Benoit Delbrouck, Noé Tits, Stéphane Dupont


Abstract
This paper aims to bring a new lightweight yet powerful solution for the task of Emotion Recognition and Sentiment Analysis. Our motivation is to propose two architectures based on Transformers and modulation that combine the linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state-of-the-art in the field. To demonstrate the efficiency of our models, we carefully evaluate their performance on the IEMOCAP, MOSI, MOSEI and MELD datasets. The experiments can be directly replicated and the code is fully open for future research.
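The abstract does not detail how the modulation-based fusion works. As an illustrative assumption only, one common modulation scheme is FiLM-style conditioning, in which a pooled acoustic vector predicts a per-dimension scale and shift applied to the linguistic features; the sketch below (all names and shapes are hypothetical, not taken from the paper's code) shows that idea in NumPy:

```python
import numpy as np

def film_modulate(linguistic, acoustic, W_gamma, W_beta):
    # Hypothetical FiLM-style modulation: the pooled acoustic vector
    # predicts a scale (gamma) and shift (beta) for each linguistic
    # feature dimension, broadcast across the token sequence.
    gamma = acoustic @ W_gamma   # shape (d_ling,)
    beta = acoustic @ W_beta     # shape (d_ling,)
    return gamma * linguistic + beta

rng = np.random.default_rng(0)
seq_len, d_ling, d_ac = 5, 8, 4
linguistic = rng.standard_normal((seq_len, d_ling))  # e.g. Transformer token states
acoustic = rng.standard_normal(d_ac)                 # pooled acoustic summary
W_gamma = rng.standard_normal((d_ac, d_ling))
W_beta = rng.standard_normal((d_ac, d_ling))

fused = film_modulate(linguistic, acoustic, W_gamma, W_beta)
print(fused.shape)  # (5, 8)
```

In a full model, the modulated features would then feed the remaining Transformer layers and a classification head; the actual fusion mechanism is described in the paper and the released repository.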
Anthology ID:
2020.nlpbt-1.1
Volume:
Proceedings of the First International Workshop on Natural Language Processing Beyond Text
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | nlpbt
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/2020.nlpbt-1.1
DOI:
10.18653/v1/2020.nlpbt-1.1
Cite (ACL):
Jean-Benoit Delbrouck, Noé Tits, and Stéphane Dupont. 2020. Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition. In Proceedings of the First International Workshop on Natural Language Processing Beyond Text, pages 1–10, Online. Association for Computational Linguistics.
Cite (Informal):
Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition (Delbrouck et al., nlpbt 2020)
PDF:
https://aclanthology.org/2020.nlpbt-1.1.pdf
Video:
https://slideslive.com/38939779
Code
jbdel/modulated_fusion_transformer
Data
IEMOCAP | MELD | Multimodal Opinion-level Sentiment Intensity