EmpathyEar: An Open-source Avatar Multimodal Empathetic Chatbot

Hao Fei, Han Zhang, Bin Wang, Lizi Liao, Qian Liu, Erik Cambria


Abstract
This paper introduces EmpathyEar, a pioneering open-source, avatar-based multimodal empathetic chatbot that fills the gap left by traditional text-only empathetic response generation (ERG) systems. Leveraging recent advances in large language models, combined with multimodal encoders and generators, EmpathyEar supports user inputs in any combination of text, sound, and vision, and produces multimodal empathetic responses, offering users not just textual replies but also digital avatars with talking faces and synchronized speech. A series of emotion-aware instruction-tuning steps is performed to achieve comprehensive emotional understanding and generation capabilities. In this way, EmpathyEar provides users with responses that achieve a deeper emotional resonance, closely emulating human-like empathy. The system paves the way for the next level of emotional intelligence, and we open-source the code for public access.
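To make the abstract's pipeline concrete, below is a minimal sketch of the flow it describes: multimodal encoders feed a central LLM, and the LLM's empathetic reply is rendered as synthesized speech plus a talking-face avatar synchronized to it. Every module name here is an illustrative stub of my own, not the actual EmpathyEar API; consult the released code for the real implementation.

# Hypothetical sketch, assuming stub encoders/generators in place of the
# real models the paper uses. Runnable as-is for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalInput:
    text: Optional[str] = None
    audio: Optional[bytes] = None   # e.g. raw waveform of user speech
    image: Optional[bytes] = None   # e.g. raw pixels of a user photo/frame

@dataclass
class EmpatheticResponse:
    text: str
    speech: bytes        # synthesized audio of the reply
    avatar_video: bytes  # talking-face video synced to the speech

def encode(inputs: MultimodalInput) -> str:
    """Stub: map each present modality into a prompt for the LLM.
    A real system would project audio/vision features into the
    LLM's embedding space rather than use placeholder tokens."""
    parts = []
    if inputs.text:
        parts.append(inputs.text)
    if inputs.audio:
        parts.append("<audio-embedding>")
    if inputs.image:
        parts.append("<image-embedding>")
    return " ".join(parts)

def llm_generate(prompt: str) -> str:
    """Stub for the emotion-aware, instruction-tuned LLM."""
    return f"That sounds really hard. I'm here for you. (context: {prompt[:40]})"

def text_to_speech(text: str) -> bytes:
    """Stub for emotion-conditioned speech synthesis."""
    return text.encode("utf-8")

def speech_to_avatar(speech: bytes) -> bytes:
    """Stub for talking-face generation driven by the synthesized speech."""
    return speech

def respond(inputs: MultimodalInput) -> EmpatheticResponse:
    """End-to-end pass: encode -> LLM reply -> TTS -> avatar rendering."""
    reply = llm_generate(encode(inputs))
    speech = text_to_speech(reply)
    return EmpatheticResponse(reply, speech, speech_to_avatar(speech))

if __name__ == "__main__":
    out = respond(MultimodalInput(text="I failed my exam today."))
    print(out.text)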
Anthology ID:
2024.acl-demos.7
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Yixin Cao, Yang Feng, Deyi Xiong
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
61–71
URL:
https://aclanthology.org/2024.acl-demos.7
DOI:
10.18653/v1/2024.acl-demos.7
Cite (ACL):
Hao Fei, Han Zhang, Bin Wang, Lizi Liao, Qian Liu, and Erik Cambria. 2024. EmpathyEar: An Open-source Avatar Multimodal Empathetic Chatbot. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 61–71, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
EmpathyEar: An Open-source Avatar Multimodal Empathetic Chatbot (Fei et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-demos.7.pdf