Harnessing the Power of Large Language Models for Empathetic Response Generation: Empirical Investigations and Improvements

Yushan Qian, Weinan Zhang, Ting Liu


Abstract
Empathetic dialogue is an indispensable part of building harmonious social relationships and contributes to the development of a helpful AI. Previous approaches are mainly based on fine-tuning small-scale language models. With the advent of ChatGPT, the applicability of large language models (LLMs) in this field has attracted great attention. This work empirically investigates the performance of LLMs in generating empathetic responses and proposes three improvement methods: semantically similar in-context learning, two-stage interactive generation, and combination with a knowledge base. Extensive experiments show that LLMs can significantly benefit from our proposed methods and are able to achieve state-of-the-art performance in both automatic and human evaluations. Additionally, we explore the possibility of GPT-4 simulating human evaluators.
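The first proposed method, semantically similar in-context learning, selects few-shot exemplars whose dialogue contexts are semantically close to the current query before prompting the LLM. The sketch below is an illustrative assumption of the general technique, not the paper's implementation: it uses a toy bag-of-words embedding and cosine similarity, whereas a real system would use a sentence encoder, and the exemplar pool, field names, and prompt template are hypothetical.

```python
# Illustrative sketch of semantically similar in-context learning
# (NOT the paper's implementation): retrieve the exemplars most
# similar to the query and prepend them to the LLM prompt.
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; a real system would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_exemplars(query, pool, k=2):
    # Rank the exemplar pool by similarity to the query context.
    q = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(q, embed(ex["context"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, exemplars):
    # Assemble a few-shot prompt from the retrieved exemplars.
    shots = "\n".join(f"Context: {e['context']}\nEmpathetic reply: {e['reply']}"
                      for e in exemplars)
    return f"{shots}\nContext: {query}\nEmpathetic reply:"

# Hypothetical exemplar pool for demonstration.
pool = [
    {"context": "I lost my job today and feel hopeless.",
     "reply": "I'm so sorry, that must be devastating."},
    {"context": "My dog just learned a new trick!",
     "reply": "That's adorable, you must be proud!"},
    {"context": "I failed my driving test again.",
     "reply": "That's frustrating, but you'll get it next time."},
]

query = "I just lost my job and I'm scared."
prompt = build_prompt(query, select_exemplars(query, pool, k=1))
print(prompt)
```

The prompt would then be sent to the LLM; the other two methods (two-stage interactive generation and knowledge-base combination) would further refine the response.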
Anthology ID:
2023.findings-emnlp.433
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6516–6528
URL:
https://aclanthology.org/2023.findings-emnlp.433
DOI:
10.18653/v1/2023.findings-emnlp.433
Cite (ACL):
Yushan Qian, Weinan Zhang, and Ting Liu. 2023. Harnessing the Power of Large Language Models for Empathetic Response Generation: Empirical Investigations and Improvements. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6516–6528, Singapore. Association for Computational Linguistics.
Cite (Informal):
Harnessing the Power of Large Language Models for Empathetic Response Generation: Empirical Investigations and Improvements (Qian et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.433.pdf