PATIENT-Ψ: Using Large Language Models to Simulate Patients for Training Mental Health Professionals
Ruiyi Wang, Stephanie Milani, Jamie C. Chiu, Jiayin Zhi, Shaun M. Eack, Travis Labrum, Samuel M. Murphy, Nev Jones, Kate V. Hardy, Hong Shen, Fei Fang, Zhiyu Chen
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Mental illness remains one of the most critical public health issues. Despite its importance, many mental health professionals highlight a disconnect between their training and real-world patient interactions. To help bridge this gap, we propose PATIENT-Ψ, a novel patient simulation framework for cognitive behavior therapy (CBT) training. To build PATIENT-Ψ, we construct diverse patient cognitive models based on CBT principles and use large language models (LLMs) programmed with these cognitive models to act as simulated therapy patients. We propose an interactive training scheme, PATIENT-Ψ-TRAINER, for mental health trainees to practice a key CBT skill, formulating the cognitive model of the patient, through role-playing a therapy session with PATIENT-Ψ. To evaluate PATIENT-Ψ, we conducted a comprehensive user study with 13 mental health trainees and 20 experts. The results demonstrate that practice using PATIENT-Ψ-TRAINER enhances the perceived skill acquisition and confidence of the trainees beyond existing forms of training such as textbooks, videos, and role-play with non-patients. Based on the experts' perceptions, PATIENT-Ψ is perceived to be closer to real patient interactions than GPT-4, and PATIENT-Ψ-TRAINER holds strong promise for improving trainee competencies. Our code and data are released at https://github.com/ruiyiw/patient-psi.
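To make the "LLM programmed with a cognitive model" idea concrete, below is a minimal, illustrative sketch of how a structured CBT cognitive model could be turned into a system prompt for a chat-completion API so that the model role-plays a patient. The field names, prompt wording, and use of the OpenAI Python client are assumptions for illustration, not the exact prompts or interface released in the PATIENT-Ψ repository.

```python
# Illustrative sketch: driving an LLM with a CBT cognitive model so it
# role-plays a therapy patient. Field names and prompt text are hypothetical,
# not the released prompts from https://github.com/ruiyiw/patient-psi.
from dataclasses import dataclass
from openai import OpenAI  # assumes the `openai` package (>=1.0) is installed


@dataclass
class CognitiveModel:
    """Structured components of a CBT-style patient cognitive model."""
    relevant_history: str
    core_belief: str
    intermediate_belief: str
    coping_strategies: str
    situation: str
    automatic_thought: str
    emotion: str
    behavior: str


def build_patient_prompt(model: CognitiveModel) -> str:
    """Render the cognitive model as a system prompt for the simulated patient."""
    return (
        "You are role-playing a therapy patient in a CBT session.\n"
        f"Relevant history: {model.relevant_history}\n"
        f"Core belief: {model.core_belief}\n"
        f"Intermediate belief: {model.intermediate_belief}\n"
        f"Coping strategies: {model.coping_strategies}\n"
        f"Situation: {model.situation}\n"
        f"Automatic thought: {model.automatic_thought}\n"
        f"Emotion: {model.emotion}\n"
        f"Behavior: {model.behavior}\n"
        "Stay in character and reveal your beliefs only gradually, "
        "as a real patient would."
    )


def patient_reply(client: OpenAI, model: CognitiveModel, transcript: list[dict]) -> str:
    """Generate the simulated patient's next turn given the session so far."""
    messages = [{"role": "system", "content": build_patient_prompt(model)}] + transcript
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```

In a training session, the trainee's turns would be appended to `transcript` as user messages and the simulated patient's replies as assistant messages, while the trainee separately attempts to reconstruct the underlying cognitive model from the conversation.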