Ben Cohen
2024
Assessing Motivational Interviewing Sessions with AI-Generated Patient Simulations
Stav Yosef | Moreah Zisquit | Ben Cohen | Anat Klomek Brunstein | Kfir Bar | Doron Friedman
Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024)
There is growing interest in utilizing large language models (LLMs) in the field of mental health, and this goes as far as suggesting automated LLM-based therapists. Evaluating such generative models in therapy sessions is essential, yet remains an ongoing and complex challenge. We suggest a novel approach: an LLM-based digital patient platform which generates digital patients that can engage in a text-based conversation with either automated or human therapists. Moreover, we show that LLMs can be used to rate the quality of such sessions by completing questionnaires originally designed for human patients. We demonstrate that the ratings are both statistically reliable and valid, indicating that they are consistent and capable of distinguishing among three levels of therapist expertise. In the present study, we focus on motivational interviewing, but we suggest that this platform can be adapted to facilitate other types of therapy. We plan to publish the digital patient platform and make it available to the research community, with the hope of contributing to the standardization of evaluating automated therapists.
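As a rough illustration of the questionnaire-based rating idea described in the abstract, the sketch below asks an LLM to answer a patient-perspective questionnaire item about a session transcript and averages the item scores. The API client, model name, questionnaire items, and rating scale are placeholder assumptions, not the instrument or prompts used in the paper.

```python
# Illustrative sketch only: item wording, model name, and the 1-5 scale are
# placeholders, not the authors' published questionnaire or prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical Likert-style items standing in for a patient-rated questionnaire.
ITEMS = [
    "My therapist listened to me and understood my point of view.",
    "My therapist helped me explore my own reasons for change.",
]

def rate_session(transcript: str, item: str, model: str = "gpt-4o") -> int:
    """Ask the LLM to answer one questionnaire item, in the patient's voice,
    about the given session transcript. Returns a 1-5 rating."""
    prompt = (
        "You are the patient in the therapy session below. "
        "Rate the following statement on a scale of 1 (strongly disagree) "
        "to 5 (strongly agree). Reply with the number only.\n\n"
        f"Session transcript:\n{transcript}\n\nStatement: {item}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())

def session_score(transcript: str) -> float:
    """Average rating over all items, as a simple session-quality proxy."""
    return sum(rate_session(transcript, item) for item in ITEMS) / len(ITEMS)
```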
Motivational Interviewing Transcripts Annotated with Global Scores
Ben Cohen | Moreah Zisquit | Stav Yosef | Doron Friedman | Kfir Bar
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Motivational interviewing (MI) is a counseling approach that aims to increase intrinsic motivation and commitment to change. Despite its effectiveness for various disorders such as addiction, weight loss, and smoking cessation, publicly available annotated MI datasets are scarce, limiting the development and evaluation of MI language generation models. We present MI-TAGS, a new annotated dataset of MI therapy sessions in English, collected from video recordings available from public sources. The dataset includes 242 MI demonstration transcripts annotated with the MI Treatment Integrity (MITI) 4.2 therapist behavioral codes and global scores, and Client Language EAsy Rating (CLEAR) 1.0 tags for client speech. In this paper, we describe the process of data collection, transcription, and annotation, and provide an analysis of the new dataset. Additionally, we explore the potential use of the dataset for training language models to perform several MITI classification tasks; our results suggest that models may be able to automatically provide utterance-level annotations as well as global scores, with performance comparable to human annotators.
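As a rough illustration of the utterance-level MITI classification task mentioned in the abstract, the sketch below fine-tunes a standard transformer classifier on therapist utterances. The label subset, example data, base model, and hyperparameters are assumptions made for illustration; the MI-TAGS file format and the authors' actual training setup are not specified here.

```python
# Illustrative sketch only: the label set, example data, base model, and
# hyperparameters below are assumptions, not the authors' published pipeline.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Hypothetical subset of MITI 4.2 therapist behavior codes.
LABELS = ["Giving Information", "Question", "Simple Reflection",
          "Complex Reflection", "Affirm"]
label2id = {label: i for i, label in enumerate(LABELS)}

# Placeholder data; in practice these would be utterances and codes from MI-TAGS.
train = Dataset.from_dict({
    "text": ["What brings you here today?",
             "It sounds like you're torn about quitting."],
    "label": [label2id["Question"], label2id["Complex Reflection"]],
})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

def tokenize(batch):
    # Tokenize each utterance independently, truncating long turns.
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="miti-classifier",
                           per_device_train_batch_size=8,
                           num_train_epochs=3),
    train_dataset=train,
)
trainer.train()
```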