Evaluating the Effectiveness of Large Language Models in Establishing Conversational Grounding

Biswesh Mohapatra, Manav Kapadnis, Laurent Romary, Justine Cassell


Abstract
Conversational grounding, vital for building dependable dialog systems, involves ensuring a mutual understanding of shared information. Despite its importance, there has been limited research on this aspect of conversation in recent years, especially since the advent of Large Language Models (LLMs). Previous studies have highlighted the shortcomings of pre-trained language models in conversational grounding. However, most testing of conversational grounding capabilities relies on human evaluations, which are costly and time-consuming. This has led to a lack of testing across multiple models of varying sizes, a critical need given the rapid rate of new model releases. This gap in research becomes more significant considering recent advances in language models, which have led to new emergent capabilities. In this paper, we evaluate the performance of LLMs on various aspects of conversational grounding and analyze why some models perform better than others. We demonstrate a direct correlation between the size of the pre-training data and conversational grounding abilities, suggesting that models independently acquire a specific form of pragmatic capability from larger pre-training datasets. Finally, we propose ways to enhance the capabilities of the models that lag in this aspect.
Anthology ID:
2024.emnlp-main.545
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9767–9781
URL:
https://aclanthology.org/2024.emnlp-main.545
Cite (ACL):
Biswesh Mohapatra, Manav Kapadnis, Laurent Romary, and Justine Cassell. 2024. Evaluating the Effectiveness of Large Language Models in Establishing Conversational Grounding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 9767–9781, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Evaluating the Effectiveness of Large Language Models in Establishing Conversational Grounding (Mohapatra et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.545.pdf