COIN: Conversational Interactive Networks for Emotion Recognition in Conversation

Haidong Zhang, Yekun Chai


Abstract
Emotion recognition in conversation has received considerable attention recently because of its practical industrial applications. Existing methods tend to overlook the immediate mutual interaction between different speakers at the speaker-utterance level, or apply a single speaker-agnostic RNN to utterances from different speakers. We propose COIN, a conversational interactive model that mitigates this problem by applying state mutual interaction within history contexts. In addition, we introduce a stacked global interaction module to capture contextual and inter-dependency representations in a hierarchical manner. To improve robustness and generalization during training, we generate adversarial examples by applying minor perturbations to the multimodal feature inputs, unveiling the benefits of adversarial examples for emotion detection. The proposed model empirically achieves the current state-of-the-art results on the IEMOCAP benchmark dataset.
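The abstract describes generating adversarial examples by perturbing the multimodal input features. The paper's exact procedure is not given here; as a minimal sketch, the idea resembles an FGSM-style perturbation, where each input feature is nudged by a small step in the direction that increases the training loss. The logistic model, weights, and feature vector below are illustrative assumptions, not the paper's architecture.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(x, w, y):
    # Binary cross-entropy for a single example under a toy linear model.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def input_gradient(x, w, y):
    # For logistic regression, d(loss)/d(x_i) = (p - y) * w_i.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

def fgsm_perturb(x, w, y, eps=0.05):
    # Step each feature by eps in the sign of the input gradient,
    # i.e. the direction that locally increases the loss.
    g = input_gradient(x, w, y)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

# Toy multimodal feature vector and model weights (assumptions).
x = [0.5, -1.2, 0.3]
w = [0.8, -0.4, 1.1]
y = 1
x_adv = fgsm_perturb(x, w, y)
```

Training on such perturbed inputs alongside the clean ones is a common way to improve robustness; the perturbed example `x_adv` has a strictly higher loss than `x` under this toy model.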
Anthology ID:
2021.maiworkshop-1.3
Volume:
Proceedings of the Third Workshop on Multimodal Artificial Intelligence
Month:
June
Year:
2021
Address:
Mexico City, Mexico
Venue:
maiworkshop
Publisher:
Association for Computational Linguistics
Pages:
12–18
URL:
https://aclanthology.org/2021.maiworkshop-1.3
DOI:
10.18653/v1/2021.maiworkshop-1.3
Bibkey:
Cite (ACL):
Haidong Zhang and Yekun Chai. 2021. COIN: Conversational Interactive Networks for Emotion Recognition in Conversation. In Proceedings of the Third Workshop on Multimodal Artificial Intelligence, pages 12–18, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
COIN: Conversational Interactive Networks for Emotion Recognition in Conversation (Zhang & Chai, maiworkshop 2021)
PDF:
https://aclanthology.org/2021.maiworkshop-1.3.pdf
Data
IEMOCAP