EHDChat: A Knowledge-Grounded, Empathy-Enhanced Language Model for Healthcare Interactions

Shenghan Wu, Wynne Hsu, Mong Li Lee


Abstract
Large Language Models (LLMs) excel at a range of tasks but often struggle with issues like hallucination and inadequate empathy support. To address hallucinations, we ground our dialogues in medical knowledge sourced from external repositories such as Disease Ontology and DrugBank. To improve empathy support, we develop the Empathetic Healthcare Dialogues dataset, which utilizes multiple dialogue strategies in each response. This dataset is then used to fine-tune an LLM, and we introduce a lightweight, adaptable method called Strategy Combination Guidance to enhance the emotional support capabilities of the fine-tuned model, named EHDChat. Our evaluations show that EHDChat significantly outperforms existing models in providing emotional support and medical accuracy, demonstrating the effectiveness of our approach in enhancing empathetic and informed AI interactions in healthcare.
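The abstract's knowledge-grounding idea can be illustrated with a minimal sketch: look up relevant entries in an external medical knowledge base and prepend them to the prompt so the model answers from retrieved facts rather than parametric memory. Everything below is hypothetical — the tiny dictionary stands in for repositories such as Disease Ontology or DrugBank, and this is not the authors' actual pipeline.

```python
# Toy stand-in for an external medical knowledge base (e.g. Disease
# Ontology or DrugBank); real lookups would query those repositories.
KNOWLEDGE_BASE = {
    "migraine": "Migraine is a neurological disorder marked by recurrent headaches.",
    "ibuprofen": "Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID).",
}


def retrieve_facts(user_message: str) -> list[str]:
    """Return knowledge entries whose term appears in the user message."""
    text = user_message.lower()
    return [fact for term, fact in KNOWLEDGE_BASE.items() if term in text]


def build_grounded_prompt(user_message: str) -> str:
    """Prepend retrieved facts so the model's reply is grounded in them."""
    facts = retrieve_facts(user_message)
    context = "\n".join(f"- {f}" for f in facts) or "- (no matching entries)"
    return (
        "Answer using only the medical facts below.\n"
        f"Facts:\n{context}\n"
        f"Patient: {user_message}\n"
        "Assistant:"
    )


prompt = build_grounded_prompt("I get migraines; can I take ibuprofen?")
print(prompt)
```

The grounded prompt would then be passed to the fine-tuned model; constraining the answer to retrieved facts is the basic mechanism by which hallucination is reduced.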
Anthology ID:
2024.sicon-1.10
Volume:
Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
James Hale, Kushal Chawla, Muskan Garg
Venue:
SICon
Publisher:
Association for Computational Linguistics
Pages:
141–151
URL:
https://aclanthology.org/2024.sicon-1.10
Cite (ACL):
Shenghan Wu, Wynne Hsu, and Mong Li Lee. 2024. EHDChat: A Knowledge-Grounded, Empathy-Enhanced Language Model for Healthcare Interactions. In Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024), pages 141–151, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
EHDChat: A Knowledge-Grounded, Empathy-Enhanced Language Model for Healthcare Interactions (Wu et al., SICon 2024)
PDF:
https://aclanthology.org/2024.sicon-1.10.pdf