Distilling Empathy from Large Language Models

Henry J. Xie, Jinghan Zhang, Xinhao Zhang, Kunpeng Liu


Abstract
The distillation of knowledge from Large Language Models (LLMs) into Smaller Language Models (SLMs), which preserves the capabilities and performance of LLMs while reducing model size, has played a key role in the proliferation of LLMs. Because SLMs are considerably smaller than LLMs, they are often deployed in domains where human interaction is frequent but resources are highly constrained, e.g., smartphones. It is therefore crucial to ensure that empathy, a fundamental aspect of positive human interaction that has already been instilled in LLMs, is retained by SLMs after distillation. In this paper, we develop a comprehensive approach for effective empathy distillation from LLMs into SLMs. Our approach features a two-step fine-tuning process that fully leverages datasets of empathetic dialogue responses distilled from LLMs. We explore several distillation methods beyond basic direct prompting and propose four unique sets of prompts for targeted empathy improvement, significantly enhancing the empathy distillation process. Our evaluations demonstrate that SLMs fine-tuned through the two-step process on distillation datasets enhanced by the targeted empathy improvement prompts significantly outperform the base SLM at generating empathetic responses, achieving a win rate above 90%. The targeted empathy improvement prompts also outperform basic direct prompting by more than 10% in win rate.
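The sketch below is a rough illustration of the pipeline the abstract describes: prompt a teacher LLM for empathetic dialogue responses, then fine-tune a student SLM on the distilled data. It is a minimal sketch under assumptions, not the authors' implementation: the model names, the single empathy prompt, and the collapsed single-pass fine-tuning are placeholders, whereas the paper uses four targeted empathy-improvement prompt sets and a two-step fine-tuning process.

```python
# Illustrative sketch of LLM-to-SLM empathy distillation (Hugging Face stack).
# Model names and the prompt are assumptions; the paper's two-step fine-tuning
# is collapsed here into a single causal-LM fine-tuning pass.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

TEACHER = "meta-llama/Llama-2-70b-chat-hf"  # assumed teacher LLM
STUDENT = "meta-llama/Llama-2-7b-hf"        # assumed student SLM

# One example "targeted empathy" instruction; the paper proposes four prompt sets.
EMPATHY_PROMPT = ("Respond to the speaker with empathy: acknowledge their "
                  "emotions before offering support.\n\nDialogue:\n{ctx}\nResponse:")

def distill(dialogue_contexts):
    """Step 1: generate empathetic responses with the teacher LLM."""
    tok = AutoTokenizer.from_pretrained(TEACHER)
    teacher = AutoModelForCausalLM.from_pretrained(TEACHER, device_map="auto")
    rows = []
    for ctx in dialogue_contexts:
        prompt = EMPATHY_PROMPT.format(ctx=ctx)
        ids = tok(prompt, return_tensors="pt").to(teacher.device)
        out = teacher.generate(**ids, max_new_tokens=128, do_sample=False)
        # Keep only the newly generated tokens as the teacher's response.
        reply = tok.decode(out[0, ids["input_ids"].shape[1]:],
                           skip_special_tokens=True)
        rows.append({"text": prompt + " " + reply})
    return Dataset.from_list(rows)

def fine_tune(distilled):
    """Step 2: fine-tune the student SLM on the distilled responses."""
    tok = AutoTokenizer.from_pretrained(STUDENT)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(STUDENT)
    data = distilled.map(
        lambda ex: tok(ex["text"], truncation=True, max_length=512),
        remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal-LM labels
    args = TrainingArguments(output_dir="slm-empathy", num_train_epochs=3,
                             per_device_train_batch_size=4)
    Trainer(model=model, args=args, train_dataset=data,
            data_collator=collator).train()
    return model
```

Usage would be `fine_tune(distill(contexts))` for a list of dialogue contexts; in the paper, the distilled data instead feeds a two-step fine-tuning schedule and is generated with the four targeted empathy-improvement prompt sets.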
Anthology ID:
2025.sigdial-1.28
Volume:
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
August
Year:
2025
Address:
Avignon, France
Editors:
Frédéric Béchet, Fabrice Lefèvre, Nicholas Asher, Seokhwan Kim, Teva Merlin
Venue:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
343–354
URL:
https://aclanthology.org/2025.sigdial-1.28/
Cite (ACL):
Henry J. Xie, Jinghan Zhang, Xinhao Zhang, and Kunpeng Liu. 2025. Distilling Empathy from Large Language Models. In Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 343–354, Avignon, France. Association for Computational Linguistics.
Cite (Informal):
Distilling Empathy from Large Language Models (Xie et al., SIGDIAL 2025)
PDF:
https://aclanthology.org/2025.sigdial-1.28.pdf