Topic and Style-aware Transformer for Multimodal Emotion Recognition

Shuwen Qiu, Nitesh Sekhar, Prateek Singhal


Abstract
Understanding the emotions expressed in multimodal signals is key to enabling machines to better understand human communication. While the language, visual, and acoustic modalities each provide clues from a different perspective, the visual modality has been shown to contribute minimally to performance in emotion recognition because of its high dimensionality. We therefore first leverage the strong multimodal backbone VATT to project the visual signal into a common space with the language and acoustic signals. On top of this, we propose two content-oriented features, Topic and Speaking Style, to address subjectivity issues. Experiments on the benchmark MOSEI dataset show that our model outperforms SOTA results, effectively incorporates visual signals, and handles subjectivity issues, with the proposed features serving as a content “normalization”.
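To make the fusion idea in the abstract concrete, below is a minimal PyTorch sketch, not the authors' code: it assumes a pretrained VATT-style backbone has already produced per-modality feature vectors in a common space, and it fuses them with learned topic and speaking-style embeddings through a small transformer encoder. All module names, dimensions, and layer counts are illustrative assumptions; only the overall scheme (common-space modality features plus topic/style tokens feeding an emotion classifier) follows the abstract.

```python
# Hypothetical sketch of topic- and style-aware multimodal fusion.
# The VATT backbone is abstracted away as precomputed (batch, d_model)
# features per modality; all other choices here are assumptions.
import torch
import torch.nn as nn


class TopicStyleFusion(nn.Module):
    def __init__(self, d_model=256, n_topics=50, n_styles=4, n_emotions=6):
        super().__init__()
        # One learned embedding per discrete topic / speaking-style label.
        self.topic_emb = nn.Embedding(n_topics, d_model)
        self.style_emb = nn.Embedding(n_styles, d_model)
        # Learned type embeddings distinguish the five token roles
        # (language, audio, video, topic, style).
        self.type_emb = nn.Embedding(5, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_emotions)

    def forward(self, lang, audio, video, topic_id, style_id):
        # lang/audio/video: (batch, d_model) common-space features,
        # e.g. pooled outputs of a VATT-style multimodal backbone.
        tokens = torch.stack([
            lang, audio, video,
            self.topic_emb(topic_id),
            self.style_emb(style_id),
        ], dim=1)                               # (batch, 5, d_model)
        tokens = tokens + self.type_emb.weight  # broadcast over batch
        fused = self.encoder(tokens)            # (batch, 5, d_model)
        return self.head(fused.mean(dim=1))     # (batch, n_emotions)


if __name__ == "__main__":
    model = TopicStyleFusion()
    feats = [torch.randn(8, 256) for _ in range(3)]
    logits = model(*feats,
                   topic_id=torch.randint(0, 50, (8,)),
                   style_id=torch.randint(0, 4, (8,)))
    print(logits.shape)  # torch.Size([8, 6]): MOSEI's six emotion classes
```

Treating the topic and style labels as extra tokens alongside the modality features lets attention condition every modality on the content, which is one plausible reading of the "content normalization" effect the abstract describes.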
Anthology ID: 2023.findings-acl.130
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2074–2082
URL: https://aclanthology.org/2023.findings-acl.130
DOI: 10.18653/v1/2023.findings-acl.130
Cite (ACL): Shuwen Qiu, Nitesh Sekhar, and Prateek Singhal. 2023. Topic and Style-aware Transformer for Multimodal Emotion Recognition. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2074–2082, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Topic and Style-aware Transformer for Multimodal Emotion Recognition (Qiu et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.130.pdf
Video: https://aclanthology.org/2023.findings-acl.130.mp4