A Few-shot Approach to Task-oriented Dialogue Enhanced with Chitchat

Armand Stricker, Patrick Paroubek


Abstract
Large language models (LLMs) tuned for chat have recently been adopted for few-shot end-to-end task-oriented dialogue (TOD), with some success. To further assess this method, we conduct experiments on two more complex task-oriented benchmarks that integrate elements of chitchat into the conversation. We enhance a few-shot baseline by adding zero-shot chitchat detection and implementing function calling for dialogue state tracking (DST). We focus on DST because it comes first in the task-oriented pipeline, so errors introduced by added chitchat at this stage have the greatest impact on end-to-end performance. We find that this prompting method shows increased resilience to mixed-mode inputs, and our enhanced pipeline allows for natural inter-mode conversations, as assessed through human evaluation. Our findings also suggest that the performance gap between few-shot prompting for TOD and supervised task-specific models is narrowing.
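To make the "function calling for DST" idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): the LLM is assumed to emit the updated dialogue state as a JSON tool call against a slot schema, and the tracker merges the call's arguments into the running state while discarding any slots outside the schema. The tool name and slot names here are hypothetical, loosely styled after MultiWOZ-like domains.

```python
import json

# Hypothetical tool schema for a restaurant domain (slot names illustrative,
# not the paper's exact schema).
FIND_RESTAURANT_TOOL = {
    "name": "find_restaurant",
    "description": "Track the user's restaurant search constraints.",
    "parameters": {"area": "string", "food": "string", "pricerange": "string"},
}

def update_state(state: dict, tool_call_json: str) -> dict:
    """Merge the arguments of an LLM-emitted tool call into the dialogue state."""
    call = json.loads(tool_call_json)
    if call.get("name") == FIND_RESTAURANT_TOOL["name"]:
        # Keep only slots declared in the schema; drop hallucinated arguments.
        args = {k: v for k, v in call.get("arguments", {}).items()
                if k in FIND_RESTAURANT_TOOL["parameters"]}
        state = {**state, **args}
    return state

# Simulated model output: the LLM expresses the state update as a tool call.
llm_output = json.dumps({
    "name": "find_restaurant",
    "arguments": {"area": "centre", "food": "italian", "bogus_slot": "x"},
})
state = update_state({}, llm_output)
print(state)  # {'area': 'centre', 'food': 'italian'}
```

A chitchat-detection step, as described in the abstract, would run before this tracker and route purely social turns away from it, so that only task-bearing utterances update the state.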
Anthology ID:
2024.sigdial-1.50
Volume:
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
September
Year:
2024
Address:
Kyoto, Japan
Editors:
Tatsuya Kawahara, Vera Demberg, Stefan Ultes, Koji Inoue, Shikib Mehri, David Howcroft, Kazunori Komatani
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
590–602
URL:
https://aclanthology.org/2024.sigdial-1.50
Cite (ACL):
Armand Stricker and Patrick Paroubek. 2024. A Few-shot Approach to Task-oriented Dialogue Enhanced with Chitchat. In Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 590–602, Kyoto, Japan. Association for Computational Linguistics.
Cite (Informal):
A Few-shot Approach to Task-oriented Dialogue Enhanced with Chitchat (Stricker & Paroubek, SIGDIAL 2024)
PDF:
https://aclanthology.org/2024.sigdial-1.50.pdf