Justine Cassell


2024

Evaluating the Effectiveness of Large Language Models in Establishing Conversational Grounding
Biswesh Mohapatra | Manav Nitin Kapadnis | Laurent Romary | Justine Cassell
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Conversational grounding, vital for building dependable dialog systems, involves ensuring a mutual understanding of shared information. Despite its importance, there has been limited research on this aspect of conversation in recent years, especially after the advent of Large Language Models (LLMs). Previous studies have highlighted the shortcomings of pre-trained language models in conversational grounding. However, most testing for conversational grounding capabilities involves human evaluations that are costly and time-consuming. This has left a gap in testing across multiple models of varying sizes, a critical need given the rapid rate of new model releases. The gap becomes more significant considering recent advances in language models, which have led to new emergent capabilities. In this paper, we aim to evaluate the performance of LLMs in various aspects of conversational grounding and analyze why some models perform better than others. We demonstrate a direct correlation between the size of the pre-training data and conversational grounding abilities, suggesting that models trained on larger pre-training datasets independently acquire a specific form of pragmatic capability. Finally, we propose ways to enhance the capabilities of the models that lag in this aspect.
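
A minimal sketch of what an automated grounding probe of this kind could look like, assuming a Hugging Face text-generation pipeline, a toy dialog, and an exact-match scoring rule; the model name, probe question, and scoring below are illustrative assumptions rather than the paper's evaluation protocol.

    # Illustrative probe: does the model recall information grounded earlier in the dialog?
    from transformers import pipeline

    # Stand-in model; the paper compares LLMs of varying sizes.
    generator = pipeline("text-generation", model="gpt2")

    dialog = (
        "A: The meeting is moved to Thursday at 3pm, room 204.\n"
        "B: Got it, Thursday at 3pm in room 204.\n"
    )
    probe = dialog + "A: Just to confirm, which room did we agree on?\nB:"

    output = generator(probe, max_new_tokens=10, do_sample=False)[0]["generated_text"]
    answer = output[len(probe):]

    # Crude exact-match check against the grounded fact.
    print("grounded fact recalled:", "204" in answer)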

Conversational Grounding: Annotation and Analysis of Grounding Acts and Grounding Units
Biswesh Mohapatra | Seemab Hassan | Laurent Romary | Justine Cassell
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Successful conversations often rest on common understanding, where all parties are on the same page about the information being shared. This process, known as conversational grounding, is crucial for building trustworthy dialog systems that can accurately keep track of and recall the shared information. An agent's proficiency in grounding the conveyed information contributes significantly to building a reliable dialog system. Despite recent advancements in dialog systems, there exists a noticeable deficit in their grounding capabilities. Traum (1995) provided a framework for conversational grounding, introducing Grounding Acts and Grounding Units, but substantial progress, especially in the realm of Large Language Models, remains to be made. To bridge this gap, we present the annotation of two dialog corpora with Grounding Acts, Grounding Units, and a measure of their degree of grounding. We discuss our key findings from the annotation and provide a baseline model to test how well current Language Models categorize the grounding acts of the dialogs. Our work aims to provide a useful resource for further research into making conversations with machines better understood and more reliable in natural, day-to-day collaborative dialogs.
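
A minimal sketch of turn-level grounding-act categorization, assuming a zero-shot classification pipeline and a Traum-style label set; this stand-in is for illustration only and is not the baseline model reported in the paper.

    from transformers import pipeline

    # Zero-shot stand-in for a grounding-act classifier; labels follow Traum-style grounding acts.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    grounding_acts = ["initiate", "continue", "acknowledge", "repair", "request repair", "cancel"]

    turn = "So you mean the second experiment, right?"
    result = classifier(turn, candidate_labels=grounding_acts)

    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label:15s} {score:.3f}")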

2023

How About Kind of Generating Hedges using End-to-End Neural Models?
Alafate Abulimiti | Chloé Clavel | Justine Cassell
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Hedging is a strategy for softening the impact of a statement in conversation. By reducing the strength of an expression, it can help avoid embarrassment (more technically, “face threat”) for one’s listener. For this reason, it is often found in contexts of instruction, such as tutoring. In this work, we develop a model of hedge generation based on i) fine-tuning state-of-the-art language models on human-human tutoring data, followed by ii) reranking with a hedge classifier to select, from a candidate pool, the candidate that best matches the expected hedging strategy. We apply this method to a natural peer-tutoring corpus containing a significant number of disfluencies, repetitions, and repairs. The results show that generation in this noisy environment is feasible with reranking. Through an error analysis of both approaches, we reveal the challenges faced by systems attempting to accomplish both social and task-oriented goals in conversation.
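
A minimal sketch of the generate-then-rerank recipe described above, assuming a Hugging Face generation pipeline as a stand-in for the fine-tuned tutoring language model; the hedge-classifier path and its label names are hypothetical placeholders, not released artifacts.

    from transformers import pipeline

    # Stand-in for the language model fine-tuned on human-human tutoring data.
    generator = pipeline("text-generation", model="gpt2")
    # Hypothetical fine-tuned hedge classifier (path and label names are placeholders).
    hedge_clf = pipeline("text-classification", model="path/to/hedge-classifier")

    def generate_hedged_reply(context, n_candidates=5):
        # Sample a pool of candidate continuations.
        outputs = generator(context, max_new_tokens=30, do_sample=True,
                            num_return_sequences=n_candidates)
        candidates = [o["generated_text"][len(context):] for o in outputs]

        # Rerank: keep the candidate the classifier scores highest as a hedge.
        def hedge_score(text):
            pred = hedge_clf(text)[0]
            return pred["score"] if pred["label"] == "HEDGE" else 1.0 - pred["score"]

        return max(candidates, key=hedge_score)

    print(generate_hedged_reply("Tutor: Your solution to step 3 is wrong."))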

When to generate hedges in peer-tutoring interactions
Alafate Abulimiti | Chloé Clavel | Justine Cassell
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue

This paper explores the application of machine learning techniques to predict where hedging occurs in peer-tutoring interactions. The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviors. These elements are processed into a vector representation of the previous turns, which serves as input to several machine learning models, including MLP and LSTM. The results show that embedding layers, which capture the semantic information of the previous turns, significantly improve the models’ performance. Additionally, the study provides insights into the importance of various features, such as interpersonal rapport and nonverbal behaviors, in predicting hedges by using Shapley values for feature explanation. We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction. We further validate this observation through a follow-up ablation study.
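
A minimal sketch of the predict-then-explain setup, assuming synthetic previous-turn feature vectors, a scikit-learn MLP, and SHAP's KernelExplainer; the features and data are placeholders, not the annotated peer-tutoring corpus.

    import numpy as np
    import shap
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Placeholder feature vectors for the previous turns (e.g. conversational strategies,
    # tutoring strategies, tutor/tutee gaze, turn embeddings); label: hedge in next turn or not.
    X = rng.normal(size=(200, 16))
    y = rng.integers(0, 2, size=200)

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

    # Shapley-value feature attribution, analogous to the paper's feature-importance analysis.
    explainer = shap.KernelExplainer(clf.predict_proba, X[:50])
    shap_values = explainer.shap_values(X[:5])
    print(np.array(shap_values).shape)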

2022

“You might think about slightly revising the title”: Identifying Hedges in Peer-tutoring Interactions
Yann Raphalen | Chloé Clavel | Justine Cassell
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Hedges have an important role in the management of rapport. In peer-tutoring, they are notably used by tutors in dyads experiencing low rapport to tone down the impact of instructions and negative feedback. Pursuing the objective of building a tutoring agent that manages rapport with teenagers in order to improve learning, we used a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges. We compared approaches relying on pre-trained resources with others that integrate insights from the social science literature. Our best performance came from a hybrid approach that outperforms the existing baseline while being easier to interpret. We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations, identifying some novel features as well as the benefits of such a hybrid approach.
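
A minimal sketch of one way a hybrid hedge identifier could combine a pre-trained sentence encoder with hand-engineered cues from the linguistics literature; the cue lexicon, encoder choice, and toy labels are illustrative assumptions, not the paper's feature set.

    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    # Pre-trained resource: sentence embeddings.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    # Hand-engineered side: counts of hedging cues (illustrative list).
    HEDGE_CUES = ["maybe", "kind of", "sort of", "i think", "might", "probably", "a little"]

    def cue_features(utterance):
        lowered = utterance.lower()
        return [lowered.count(cue) for cue in HEDGE_CUES]

    utterances = ["You might want to kind of check that step again.",
                  "That answer is wrong, do it again."]
    labels = [1, 0]  # 1 = hedge, 0 = non-hedge (toy labels)

    # Hybrid representation: sentence embedding concatenated with cue counts.
    X = np.hstack([encoder.encode(utterances),
                   np.array([cue_features(u) for u in utterances])])
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(X))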

2016

Socially-Aware Animated Intelligent Personal Assistant Agent
Yoichi Matsuyama | Arjun Bhardwaj | Ran Zhao | Oscar Romeo | Sushma Akoju | Justine Cassell
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Automatic Recognition of Conversational Strategies in the Service of a Socially-Aware Dialog System
Ran Zhao | Tanmay Sinha | Alan Black | Justine Cassell
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2013

Automatic Prediction of Friendship via Multi-model Dyadic Features
Zhou Yu | David Gerritsen | Amy Ogan | Alan Black | Justine Cassell
Proceedings of the SIGDIAL 2013 Conference

2012

“Love ya, jerkface”: Using Sparse Log-Linear Models to Build Positive and Impolite Relationships with Teens
William Yang Wang | Samantha Finkelstein | Amy Ogan | Alan W Black | Justine Cassell
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2010

Report on the Second NLG Challenge on Generating Instructions in Virtual Environments (GIVE-2)
Alexander Koller | Kristina Striegnitz | Andrew Gargett | Donna Byron | Justine Cassell | Robert Dale | Johanna Moore | Jon Oberlander
Proceedings of the 6th International Natural Language Generation Conference

2009

The Software Architecture for the First Challenge on Generating Instructions in Virtual Environments
Alexander Koller | Donna Byron | Justine Cassell | Robert Dale | Johanna Moore | Jon Oberlander | Kristina Striegnitz
Proceedings of the Demonstrations Session at EACL 2009

Report on the First NLG Challenge on Generating Instructions in Virtual Environments (GIVE)
Donna Byron | Alexander Koller | Kristina Striegnitz | Justine Cassell | Robert Dale | Johanna Moore | Jon Oberlander
Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)

Validating the web-based evaluation of NLG systems
Alexander Koller | Kristina Striegnitz | Donna Byron | Justine Cassell | Robert Dale | Sara Dalzel-Job | Johanna Moore | Jon Oberlander
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

Reactive Redundancy and Listener Comprehension in Direction-Giving
Rachel Baker | Alastair Gill | Justine Cassell
Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue

2007

Proceedings of the Workshop on Embodied Language Processing
Justine Cassell | Dirk Heylen
Proceedings of the Workshop on Embodied Language Processing

Coordination in Conversation and Rapport
Justine Cassell | Alastair Gill | Paul Tepper
Proceedings of the Workshop on Embodied Language Processing

2006

Computational Measures for Language Similarity Across Time in Online Communities
David Huffaker | Joseph Jorgensen | Francisco Iacobelli | Paul Tepper | Justine Cassell
Proceedings of the Analyzing Conversations in Text and Speech

2005

Teaching Dialogue to Interdisciplinary Teams through Toolkits
Justine Cassell | Matthew Stone
Proceedings of the Second ACL Workshop on Effective Tools and Methodologies for Teaching NLP and CL

2004

Dialogue Systems that Can Handle Face-to-Face Joint Reference to Actions in Space
Justine Cassell
Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004

2003

Towards a Model of Face-to-Face Grounding
Yukiko Nakano | Gabe Reinstein | Tom Stocky | Justine Cassell
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

2001

Non-Verbal Cues for Discourse Structure
Justine Cassell | Yukiko Nakano | Timothy W. Bickmore | Candace L. Sidner | Charles Rich
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

2000

Coordination and context-dependence in the generation of embodied conversation
Justine Cassell | Matthew Stone | Hao Yan
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

1997

Semantic and Discourse Information for Text-to-Speech Intonation
Laurie Hiyakumoto | Scott Prevost | Justine Cassell
Concept to Speech Generation Systems