Large Language Models Know What To Say But Not When To Speak

Muhammad Umair, Vasanth Sarathy, Jan Ruiter


Abstract
Turn-taking is a fundamental mechanism in human communication that ensures smooth and coherent verbal interactions. Recent advances in Large Language Models (LLMs) have motivated their use in improving the turn-taking capabilities of Spoken Dialogue Systems (SDS), such as their ability to respond at appropriate times. However, existing models often struggle to predict opportunities for speaking, called Transition Relevance Places (TRPs), in natural, unscripted conversations: they model only turn-final TRPs, not within-turn TRPs. To address these limitations, we introduce a novel dataset of participant-labeled within-turn TRPs and use it to evaluate the performance of state-of-the-art LLMs in predicting opportunities for speaking. Our experiments reveal the current limitations of LLMs in modeling unscripted spoken interactions, highlighting areas for improvement and paving the way for more naturalistic dialogue systems.
Anthology ID: 2024.findings-emnlp.909
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15503–15514
URL: https://aclanthology.org/2024.findings-emnlp.909
Cite (ACL): Muhammad Umair, Vasanth Sarathy, and Jan Ruiter. 2024. Large Language Models Know What To Say But Not When To Speak. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15503–15514, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Large Language Models Know What To Say But Not When To Speak (Umair et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.909.pdf