emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation
Ziyang Ma | Zhisheng Zheng | Jiaxin Ye | Jinchao Li | Zhifu Gao | ShiLiang Zhang | Xie Chen
Findings of the Association for Computational Linguistics: ACL 2024
We propose emotion2vec, a universal speech emotion representation model. emotion2vec is pre-trained on open-source unlabeled emotion data through self-supervised online distillation, combining an utterance-level loss and a frame-level loss during pre-training. By training only linear layers for the speech emotion recognition task on the mainstream IEMOCAP dataset, emotion2vec outperforms state-of-the-art pre-trained universal models and emotion-specialist models. In addition, emotion2vec shows consistent improvements across speech emotion recognition datasets in 10 different languages. emotion2vec also achieves excellent results on other emotion tasks, such as song emotion recognition, emotion prediction in conversation, and sentiment analysis. Comparison experiments, ablation experiments, and visualizations comprehensively demonstrate the universal capability of the proposed emotion2vec. To the best of our knowledge, emotion2vec is the first universal representation model for various emotion-related tasks, filling a gap in the field.
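The abstract describes a pre-training objective that combines an utterance-level loss with a frame-level loss. A minimal NumPy sketch of such a combination is below; the MSE form, the mean-pooling for the utterance level, and the equal 1:1 weighting are assumptions for illustration, not the paper's exact teacher-student formulation.

```python
import numpy as np

def combined_distillation_loss(student, teacher):
    """Toy combined loss over frame-wise representations.

    student, teacher: (T, D) arrays of T frames with D-dim features.
    Frame-level term matches representations frame by frame;
    utterance-level term matches the mean-pooled embeddings.
    MSE and equal weighting are illustrative assumptions.
    """
    frame_loss = np.mean((student - teacher) ** 2)
    utt_loss = np.mean((student.mean(axis=0) - teacher.mean(axis=0)) ** 2)
    return frame_loss + utt_loss

rng = np.random.default_rng(0)
teacher_reps = rng.normal(size=(50, 768))
student_reps = teacher_reps + rng.normal(scale=0.1, size=(50, 768))
loss = combined_distillation_loss(student_reps, teacher_reps)
```

In this sketch the utterance-level term vanishes whenever the pooled embeddings agree, even if individual frames differ, which is why the frame-level term is needed to supervise temporal detail.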