ERD: A Framework for Improving LLM Reasoning for Cognitive Distortion Classification

Sehee Lim, Yejin Kim, Chi-Hyun Choi, Jy-yong Sohn, Byung-Hoon Kim


Abstract
Improving the accessibility of psychotherapy with the aid of Large Language Models (LLMs) has been garnering significant attention in recent years. Recognizing cognitive distortions in an interviewee’s utterances can be an essential part of psychotherapy, especially for cognitive behavioral therapy. In this paper, we propose ERD, which improves LLM-based cognitive distortion classification performance with the aid of two additional modules: (1) extracting the parts of the utterance related to cognitive distortion, and (2) debating the reasoning steps with multiple agents. Our experimental results on a public dataset show that ERD improves the multi-class F1 score as well as the binary specificity score. Regarding the latter, our method effectively debiases the baseline method, which has a high false positive rate, particularly when a summary of the multi-agent debate is provided to the LLMs.
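The abstract describes a two-stage pipeline (extraction followed by multi-agent debate). Below is a minimal, illustrative sketch of such a pipeline; the function names, prompts, and the `call_llm` helper are hypothetical placeholders for a generic LLM API and are not the authors' implementation.

```python
# Hypothetical sketch of an ERD-style pipeline: extract distortion-relevant
# spans, then run a simple multi-agent debate before producing a final label.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM chat-completion call; plug in your own client."""
    raise NotImplementedError("Provide an LLM backend here.")


def extract_distorted_parts(utterance: str) -> str:
    # Step 1 (Extraction): keep only the spans that may reflect a cognitive
    # distortion, reducing irrelevant context for the later reasoning steps.
    return call_llm(
        "Extract the parts of the following utterance that may contain "
        f"a cognitive distortion:\n{utterance}"
    )


def debate_classification(extracted: str, n_agents: int = 2, rounds: int = 2) -> str:
    # Step 2 (Reasoning + Debate): several agents propose classifications and
    # revise them after seeing each other's answers over a few rounds.
    opinions = [
        call_llm(f"Classify the cognitive distortion in:\n{extracted}")
        for _ in range(n_agents)
    ]
    for _ in range(rounds):
        opinions = [
            call_llm(
                "Given the other agents' answers:\n" + "\n".join(opinions)
                + f"\nRevise your classification of:\n{extracted}"
            )
            for _ in range(n_agents)
        ]
    # Final step: summarize the debate into a single label; the abstract notes
    # that providing such a summary helps reduce false positives.
    return call_llm(
        "Summarize the debate and give a final label:\n" + "\n".join(opinions)
    )
```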
Anthology ID:
2024.clinicalnlp-1.25
Volume:
Proceedings of the 6th Clinical Natural Language Processing Workshop
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Danielle Bitterman
Venues:
ClinicalNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
292–300
URL:
https://aclanthology.org/2024.clinicalnlp-1.25
DOI:
10.18653/v1/2024.clinicalnlp-1.25
Cite (ACL):
Sehee Lim, Yejin Kim, Chi-Hyun Choi, Jy-yong Sohn, and Byung-Hoon Kim. 2024. ERD: A Framework for Improving LLM Reasoning for Cognitive Distortion Classification. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 292–300, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
ERD: A Framework for Improving LLM Reasoning for Cognitive Distortion Classification (Lim et al., ClinicalNLP-WS 2024)
PDF:
https://aclanthology.org/2024.clinicalnlp-1.25.pdf