Contextual Augmentation of Pretrained Language Models for Emotion Recognition in Conversations

Jonggu Kim, Hyeonmok Ko, Seoha Song, Saebom Jang, Jiyeon Hong


Abstract
Since pretraining language models to learn contextualized word representations was first proposed, pretrained language models have achieved success in many natural language processing tasks, because the contextualized representations from their self-attention layers are useful for initializing parameters for downstream tasks. Unfortunately, however, the use of pretrained language models for emotion recognition in conversations has not been studied sufficiently. We first use ELECTRA, a state-of-the-art pretrained language model, and validate its performance on emotion recognition in conversations. Furthermore, we propose contextual augmentation of pretrained language models for emotion recognition in conversations, which considers not only previous utterances but also conversation-related information such as speakers, speech acts, and topics. We classify each piece of information according to what it relates to, and propose the position of its corresponding words within the entire input sequence. To validate the proposed method, we conduct experiments on the DailyDialog dataset, which contains abundant annotated conversation information. The experiments show that the proposed method achieves state-of-the-art F1 scores on the dataset and significantly improves performance.
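The augmentation described above amounts to serializing conversation-related information and previous utterances into the encoder's input sequence alongside the target utterance. The sketch below illustrates this idea with a hypothetical token layout; the paper's exact positions and special tokens may differ, and the function name and label strings here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (hypothetical layout): build an augmented input sequence for an
# ELECTRA-style encoder by adding conversation-related information (topic,
# speaker, speech act) and previous utterances around the target utterance.
# The actual positions proposed in the paper may differ from this illustration.

def build_augmented_input(history, target, speaker=None, act=None, topic=None,
                          cls="[CLS]", sep="[SEP]"):
    """Concatenate context and auxiliary labels around the target utterance.

    history: list of previous utterance strings (oldest first)
    target:  the utterance whose emotion is being classified
    """
    parts = [cls]
    # Conversation-level information (e.g. topic) applies to the whole dialog,
    # so place it once at the front.
    if topic:
        parts += [topic, sep]
    # Previous utterances supply the dialog context.
    for utterance in history:
        parts += [utterance, sep]
    # Utterance-level information (speaker, speech act) attaches to the target.
    if speaker:
        parts.append(speaker)
    if act:
        parts.append(act)
    parts += [target, sep]
    return " ".join(parts)

example = build_augmented_input(
    history=["How are you?"],
    target="I feel great today!",
    speaker="A", act="inform", topic="ordinary_life",
)
```

The resulting string would then be tokenized and fed to the pretrained encoder, whose [CLS] representation is classified into an emotion label.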
Anthology ID:
2020.peoples-1.7
Volume:
Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Malvina Nissim, Viviana Patti, Barbara Plank, Esin Durmus
Venue:
PEOPLES
Publisher:
Association for Computational Linguistics
Pages:
64–73
URL:
https://aclanthology.org/2020.peoples-1.7
Cite (ACL):
Jonggu Kim, Hyeonmok Ko, Seoha Song, Saebom Jang, and Jiyeon Hong. 2020. Contextual Augmentation of Pretrained Language Models for Emotion Recognition in Conversations. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 64–73, Barcelona, Spain (Online). Association for Computational Linguistics.
Cite (Informal):
Contextual Augmentation of Pretrained Language Models for Emotion Recognition in Conversations (Kim et al., PEOPLES 2020)
PDF:
https://aclanthology.org/2020.peoples-1.7.pdf
Data
DailyDialog