EmotionQueen: A Benchmark for Evaluating Empathy of Large Language Models

Yuyan Chen, Songzhou Yan, Sijia Liu, Yueze Li, Yanghua Xiao


Abstract
Emotional intelligence in large language models (LLMs) is of great importance in Natural Language Processing. However, previous research has mainly focused on basic sentiment analysis tasks, such as emotion recognition, which is not enough to evaluate LLMs’ overall emotional intelligence. Therefore, this paper presents a novel framework named EmotionQueen for evaluating the emotional intelligence of LLMs. The framework includes four distinctive tasks: Key Event Recognition, Mixed Event Recognition, Implicit Emotional Recognition, and Intention Recognition. LLMs are requested to recognize important events or implicit emotions and to generate empathetic responses. We also design two metrics to evaluate LLMs’ capabilities in recognition and response for emotion-related statements. Experiments yield significant conclusions about LLMs’ capabilities and limitations in emotional intelligence.
Anthology ID:
2024.findings-acl.128
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2149–2176
URL:
https://aclanthology.org/2024.findings-acl.128
Cite (ACL):
Yuyan Chen, Songzhou Yan, Sijia Liu, Yueze Li, and Yanghua Xiao. 2024. EmotionQueen: A Benchmark for Evaluating Empathy of Large Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 2149–2176, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
EmotionQueen: A Benchmark for Evaluating Empathy of Large Language Models (Chen et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.128.pdf