2021
Detecting Cognitive Distortions from Patient-Therapist Interactions
Sagarika Shreevastava | Peter Foltz
Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access
An important part of Cognitive Behavioral Therapy (CBT) is to recognize and restructure certain negative thinking patterns, also known as cognitive distortions. The aim of this project is to detect these distortions using natural language processing. We compare and contrast different types of linguistic features as well as different classification algorithms and explore the limitations of applying these techniques to a small dataset. We find that using pre-trained Sentence-BERT embeddings to train an SVM classifier yields the best results, with an F1-score of 0.79. Lastly, we discuss how this work provides insights into the types of linguistic features that are inherent in cognitive distortions.
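A minimal sketch of the best-performing configuration described in the abstract: sentence embeddings from a pre-trained Sentence-BERT model feeding an SVM classifier. The model name, toy utterances, and kernel choice are illustrative assumptions, not the paper's exact setup.

```python
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

# Toy utterances: 1 = contains a cognitive distortion, 0 = does not.
train_texts = [
    "I always fail at everything I try.",      # overgeneralization
    "If I make one mistake, I'm worthless.",   # all-or-nothing thinking
    "The meeting ran a little long today.",
    "I finished most of my tasks this week.",
]
train_labels = [1, 1, 0, 0]

# "all-MiniLM-L6-v2" is an illustrative Sentence-BERT checkpoint.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X_train = encoder.encode(train_texts)

clf = SVC(kernel="linear").fit(X_train, train_labels)

# Classify a new patient utterance.
new_utterance = ["Nothing I do ever works out."]
print(clf.predict(encoder.encode(new_utterance)))  # expect [1]
```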
Safeguarding against spurious AI-based predictions: The case of automated verbal memory assessment
Chelsea Chandler | Peter Foltz | Alex Cohen | Terje Holmlund | Brita Elvevåg
Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access
A growing amount of psychiatric research incorporates machine learning and natural language processing methods; however, findings have yet to be translated into actual clinical decision support systems. Many of these studies are based on relatively small datasets in homogeneous populations, with the associated risk that the models may not perform adequately on new data in real clinical practice. The nature of serious mental illness is that it is hard to define, hard to capture, and requires frequent monitoring, which leads to imperfect data in which attribute and class noise are common. For an effective AI-mediated clinical decision support system, computational safeguards must be placed on the models in order to avoid spurious predictions and to allow humans to review data in settings where models are unstable or unlikely to generalize. This paper describes two approaches to implementing safeguards: (1) determining cases in which models are unstable by means of attribute- and class-based outlier detection, and (2) measuring the extent to which models show inductive bias. These safeguards are illustrated in the automated scoring of a story recall task via natural language processing methods. With the integration of human-in-the-loop machine learning in the clinical implementation process, incorporating safeguards such as these into the models will offer patients increased protection from spurious predictions.
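A hedged sketch of safeguard (1): flag attribute-space outliers so that a human reviews cases where an automated scorer is likely to be unstable. IsolationForest is an illustrative detector and the features are synthetic; the paper's exact outlier-detection method may differ.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features extracted from the training population (e.g., language-based
# measures from story recalls); synthetic here.
train_features = rng.normal(size=(200, 5))
detector = IsolationForest(random_state=0).fit(train_features)

# A new case far from the training distribution.
new_case = rng.normal(loc=6.0, size=(1, 5))
if detector.predict(new_case)[0] == -1:
    print("Outlier: route to human review rather than trusting the model.")
else:
    print("In-distribution: automated score may be used.")
```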
2020
Multiple Instance Learning for Content Feedback Localization without Annotation
Scott Hellman | William Murray | Adam Wiemerslage | Mark Rosenstein | Peter Foltz | Lee Becker | Marcia Derr
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
Automated Essay Scoring (AES) can be used to automatically generate holistic scores with reliability comparable to human scoring. In addition, AES systems can provide formative feedback to learners, typically at the essay level. In contrast, we are interested in providing feedback specialized to the content of the essay, specifically for the content areas required by the rubric. A key objective is that the feedback should be localized alongside the relevant essay text. An important step in this process is determining where in the essay the rubric-designated points and topics are discussed. A natural approach to this task is to train a classifier using manually annotated data; however, collecting such data is extremely resource-intensive. Instead, we propose a method to predict these annotation spans without requiring any labeled annotation data. Our approach is to treat AES as a Multiple Instance Learning (MIL) task. We show that such models can both predict content scores and localize content by leveraging their sentence-level score predictions. This capability arises despite the models never having access to annotation training data. Implications are discussed for improving formative feedback and for explainable AES models.
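An illustrative sketch of the MIL framing: an essay (the bag) is scored by aggregating per-sentence (instance) predictions, and the same per-sentence scores localize where rubric content appears. The per-sentence scorer below is a stand-in placeholder, not the paper's trained model.

```python
import numpy as np

def sentence_scores(sentences):
    # Placeholder per-sentence content scores in [0, 1]; in the paper these
    # come from a model trained only on essay-level scores.
    return np.array([0.1, 0.8, 0.2])

sentences = [
    "This essay discusses photosynthesis.",
    "Plants convert light energy into chemical energy in chloroplasts.",
    "In conclusion, the process is important.",
]
scores = sentence_scores(sentences)

essay_score = scores.max()                   # bag-level aggregation
localized = sentences[int(scores.argmax())]  # feedback localization
print(f"content score: {essay_score:.1f} -> {localized}")
```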
2019
Overcoming the bottleneck in traditional assessments of verbal memory: Modeling human ratings and classifying clinical group membership
Chelsea Chandler | Peter W. Foltz | Jian Cheng | Jared C. Bernstein | Elizabeth P. Rosenfeld | Alex S. Cohen | Terje B. Holmlund | Brita Elvevåg
Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology
Verbal memory is affected by numerous clinical conditions, and most neuropsychological and clinical examinations evaluate it. However, a bottleneck exists in such endeavors because traditional methods require expert human review, and usually only a couple of test versions exist, limiting the frequency of administration and the range of clinical applications. The present study overcomes this bottleneck by automating the administration, transcription, analysis, and scoring of story recall. A large group of healthy participants (n = 120) and patients with mental illness (n = 105) interacted with a mobile application that administered a wide range of assessments, including verbal memory. The speech generated by participants when retelling stories from the memory task was transcribed using automatic speech recognition tools and compared with human transcriptions (overall word error rate = 21%). An assortment of surface-level and semantic language-based features was extracted from the verbal recalls. A final set of three features was used both to predict expert human ratings with a ridge regression model (r = 0.88) and to differentiate patients from healthy individuals with an ensemble of logistic regression classifiers (accuracy = 76%). This is the first ‘outside of the laboratory’ study to showcase the viability of a complete pipeline for the automated assessment of verbal memory in naturalistic settings.
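A minimal sketch of the scoring stage described above: a small feature set predicting expert human ratings with ridge regression. The feature values and ratings are synthetic; in the paper, the three features (surface-level and semantic) are derived from ASR transcripts of story recalls.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
# 225 participants, 3 features per recall (synthetic stand-ins).
X = rng.normal(size=(225, 3))
ratings = X @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.2, size=225)

# Fit on a training split, correlate predictions with ratings on a held-out split.
model = Ridge(alpha=1.0).fit(X[:180], ratings[:180])
r, _ = pearsonr(model.predict(X[180:]), ratings[180:])
print(f"held-out correlation with expert ratings: r = {r:.2f}")
```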
2015
Identifying Patterns For Short Answer Scoring Using Graph-based Lexico-Semantic Text Matching
Lakshmi Ramachandran | Jian Cheng | Peter Foltz
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications
Generating Reference Texts for Short Answer Scoring Using Graph-based Summarization
Lakshmi Ramachandran | Peter Foltz
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications
Practical issues in developing semantic frameworks for the analysis of verbal fluency data: A Norwegian data case study
Mark Rosenstein | Peter Foltz | Anja Vaskinn | Brita Elvevåg
Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality
2004
Automated Team Discourse Annotation and Performance Prediction Using LSA
Melanie J. Martin | Peter W. Foltz
Proceedings of HLT-NAACL 2004: Short Papers