MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation

Shuo Xu, Yuxiang Jia, Changyong Niu, Hongying Zan


Abstract
Emotion recognition in conversation is important for an empathetic dialogue system to understand the user’s emotion and then generate appropriate emotional responses. However, most previous research focuses on modeling conversational context primarily from the textual modality, or exploits multimodal information only through simple feature concatenation. To exploit multimodal and contextual information more effectively, we propose a multimodal directed acyclic graph (MMDAG) network that injects information flows inside each modality and across modalities into the DAG architecture. Experiments on IEMOCAP and MELD show that our model outperforms other state-of-the-art models. Comparative studies validate the effectiveness of the proposed modality fusion method.
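The abstract only names the mechanism, so the following is a minimal sketch (in PyTorch, not the authors' released code) of what one such layer could look like: each utterance node aggregates the states of earlier utterances in the same modality (an intra-modal flow) and the same-turn states of the other modalities (a cross-modal flow), then fuses the two with a learned gate. The mean-pooling aggregators, the gating scheme, and all layer and variable names are illustrative assumptions.

# Minimal sketch of one multimodal DAG layer, assuming mean-pooled
# predecessor aggregation and gated fusion. Not the authors' method.
import torch
import torch.nn as nn

class MultimodalDAGLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.intra = nn.Linear(2 * dim, dim)  # flow inside a modality
        self.cross = nn.Linear(2 * dim, dim)  # flow across modalities
        self.gate = nn.Linear(2 * dim, dim)   # gated fusion of the two flows

    def forward(self, feats):
        # feats maps modality name -> (num_utterances, dim) node states,
        # with utterances in temporal order so edges only point forward (a DAG).
        out = {}
        for m, h in feats.items():
            others = [v for k, v in feats.items() if k != m]
            new_h = []
            for t in range(h.size(0)):
                # Intra-modal flow: aggregate states of earlier turns (mean pooling).
                prev = h[:t].mean(0) if t > 0 else torch.zeros_like(h[0])
                intra = torch.tanh(self.intra(torch.cat([h[t], prev], -1)))
                # Cross-modal flow: pool the other modalities at the same turn.
                x = torch.stack([o[t] for o in others]).mean(0)
                cross = torch.tanh(self.cross(torch.cat([h[t], x], -1)))
                # Gated fusion of intra-modal and cross-modal information.
                g = torch.sigmoid(self.gate(torch.cat([intra, cross], -1)))
                new_h.append(g * intra + (1 - g) * cross)
            out[m] = torch.stack(new_h)
        return out

# Toy usage: a dialogue of 5 utterances with 16-dim features per modality.
layer = MultimodalDAGLayer(16)
feats = {m: torch.randn(5, 16) for m in ("text", "audio", "visual")}
updated = layer(feats)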
Anthology ID:
2022.lrec-1.733
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
6802–6807
URL:
https://aclanthology.org/2022.lrec-1.733
Cite (ACL):
Shuo Xu, Yuxiang Jia, Changyong Niu, and Hongying Zan. 2022. MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6802–6807, Marseille, France. European Language Resources Association.
Cite (Informal):
MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation (Xu et al., LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.733.pdf
Data:
IEMOCAP, MELD