Leili Tavabi
2021
Speaker Turn Modeling for Dialogue Act Classification
Zihao He | Leili Tavabi | Kristina Lerman | Mohammad Soleymani
Findings of the Association for Computational Linguistics: EMNLP 2021
Dialogue Act (DA) classification is the task of classifying utterances with respect to the function they serve in a dialogue. Existing approaches to DA classification model utterances without incorporating the turn changes among speakers throughout the dialogue, thereby treating it no differently from non-interactive written text. In this paper, we propose to integrate the turn changes in conversations among speakers when modeling DAs. Specifically, we learn conversation-invariant speaker turn embeddings to represent the speaker turns in a conversation; the learned speaker turn embeddings are then merged with the utterance embeddings for the downstream task of DA classification. With this simple yet effective mechanism, our model is able to capture the semantics of the dialogue content while accounting for different speaker turns in a conversation. Validation on three public benchmark datasets demonstrates the superior performance of our model.
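The merging mechanism described in the abstract can be illustrated with a minimal sketch. All names, dimensions, and the choice of concatenation as the merge operation are illustrative assumptions, not the paper's actual architecture; the parameters here are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper)
UTT_DIM, TURN_DIM, N_CLASSES = 8, 4, 3

# Conversation-invariant speaker turn embedding table:
# indices 0/1 stand for the two alternating speakers in a dyadic dialogue.
turn_table = rng.normal(size=(2, TURN_DIM))

# Linear classifier over the merged representation (random, untrained)
W = rng.normal(size=(UTT_DIM + TURN_DIM, N_CLASSES))
b = np.zeros(N_CLASSES)

def classify_utterance(utt_emb, speaker_id):
    """Merge the utterance embedding with the speaker turn embedding
    (here by concatenation) and score the dialogue act classes."""
    merged = np.concatenate([utt_emb, turn_table[speaker_id]])
    logits = merged @ W + b
    return int(np.argmax(logits))

# Toy dialogue: four utterance embeddings with alternating speakers
utterances = rng.normal(size=(4, UTT_DIM))
speakers = [0, 1, 0, 1]
preds = [classify_utterance(u, s) for u, s in zip(utterances, speakers)]
print(preds)
```

Because the turn embeddings are indexed by speaker turn rather than speaker identity, the same table can be reused across conversations, which is what makes them conversation-invariant.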
Analysis of Behavior Classification in Motivational Interviewing
Leili Tavabi | Trang Tran | Kalin Stefanov | Brian Borsari | Joshua Woolley | Stefan Scherer | Mohammad Soleymani
Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access
Analysis of client and therapist behavior in counseling sessions can provide helpful insights for assessing the quality of the session and, consequently, the client’s behavioral outcome. In this paper, we study the automatic classification of standardized behavior codes (annotations) used for the assessment of psychotherapy sessions in Motivational Interviewing (MI). We develop models and examine the classification of client behaviors throughout MI sessions, comparing the performance of models trained on large pretrained embeddings (RoBERTa) versus interpretable, expert-selected features (LIWC). Our best performing model, using the pretrained RoBERTa embeddings, beats the baseline model, achieving an F1 score of 0.66 in subject-independent 3-class classification. Through statistical analysis of the classification results, we identify prominent LIWC features that may not have been captured by the model using pretrained embeddings. Although classification using LIWC features underperforms RoBERTa, our findings motivate the future direction of incorporating auxiliary tasks in the classification of MI codes.