Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model

Andrew Brown, Jiading Zhu, Mohamed Abdelwahab, Alec Dong, Cindy Wang, Jonathan Rose


Abstract
Large Foundational Language Models are capable of performing many tasks at a high level but are difficult to deploy in many applications because of their size and proprietary ownership. Many will be motivated to distill specific capabilities of foundational models into smaller models that can be owned and controlled. In the development of a therapeutic chatbot, we wish to distill a capability known as reflective listening, in which a therapist produces reflections of client speech. These reflections either restate what a client has said, or connect what was said to a relevant observation, idea or guess that encourages and guides the client to continue contemplation. In this paper, we present a method for distilling the generation of reflections from a Foundational Language Model (GPT-4) into smaller models. We first show that GPT-4, using zero-shot prompting, can generate reflections at a near-100% success rate, superior to all previous methods. Using reflections generated by GPT-4, we fine-tune models of different sizes from the GPT-2 family. The GPT-2-small model achieves 83% success on a hold-out test set, and GPT-2 XL achieves 90%. We also show that GPT-4 can help with the labor-intensive task of evaluating the quality of the distilled models, by using it as a zero-shot classifier. Measured against triple-human review, the classifier achieves a Cohen's kappa of 0.66, indicating substantial inter-rater reliability.
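The distillation step described in the abstract amounts to standard supervised fine-tuning of a causal language model on teacher-generated data. Below is a minimal sketch of that step using the Hugging Face transformers library; the client statement and reflection shown are hypothetical stand-ins for the GPT-4-generated training pairs, and the loop is simplified (no batching, epochs, or validation) relative to what the paper's experiments would require.

```python
# Minimal sketch: fine-tune GPT-2 on (client statement -> reflection) pairs
# produced by a larger teacher model. Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical example pair; in the paper, reflections come from GPT-4.
pairs = [
    ("Client: I know smoking is bad for me, but it helps me relax.",
     "Reflection: Smoking feels like one of the few ways you have to unwind."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for statement, reflection in pairs:
    # Train on the concatenated text with the standard causal-LM objective.
    text = statement + "\n" + reflection + tokenizer.eos_token
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # shifted next-token prediction loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Agreement between the zero-shot GPT-4 classifier and the human raters could then be scored with, for example, sklearn.metrics.cohen_kappa_score.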
Anthology ID:
2024.eacl-long.75
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1241–1252
URL:
https://aclanthology.org/2024.eacl-long.75
Cite (ACL):
Andrew Brown, Jiading Zhu, Mohamed Abdelwahab, Alec Dong, Cindy Wang, and Jonathan Rose. 2024. Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1241–1252, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model (Brown et al., EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.75.pdf
Video:
https://aclanthology.org/2024.eacl-long.75.mp4