Wesley Rose
2023
NatCS: Eliciting Natural Customer Support Dialogues
James Gung | Emily Moeng | Wesley Rose | Arshit Gupta | Yi Zhang | Saab Mansour
Findings of the Association for Computational Linguistics: ACL 2023
Despite growing interest in applications based on natural customer support conversations, there exist remarkably few publicly available datasets that reflect the expected characteristics of conversations in these settings. Existing task-oriented dialogue datasets, which were collected to benchmark dialogue systems mainly in written human-to-bot settings, are not representative of real customer support conversations and do not provide realistic benchmarks for systems that are applied to natural data. To address this gap, we introduce NatCS, a multi-domain collection of spoken customer service conversations. We describe our process for collecting synthetic conversations between customers and agents based on natural language phenomena observed in real conversations. Compared to previous dialogue datasets, the conversations collected with our approach are more representative of real human-to-human conversations along multiple metrics. Finally, we demonstrate potential applications of NatCS, including dialogue act classification and intent induction from conversations, showing that dialogue act annotations in NatCS provide more effective training data for modeling real conversations than existing synthetic written datasets. We publicly release NatCS to facilitate research in natural dialogue systems.
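To make the dialogue act classification use case concrete, the following is a minimal sketch of training a classifier over labeled utterances. The utterances, labels, and label set are invented placeholders, not NatCS's actual annotation schema, and the scikit-learn bag-of-words pipeline is a generic stand-in rather than the model evaluated in the paper.

```python
# Minimal dialogue act classification sketch (hypothetical data, not NatCS's schema).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder (utterance, dialogue act) pairs standing in for annotated training data.
train_utterances = [
    "hi thanks for calling how can i help you today",
    "um yeah so my card got declined twice",
    "can you confirm the last four digits of the card",
    "sure it's uh four three two one",
    "okay i've unblocked the card for you",
    "great thank you so much bye",
]
train_acts = ["greeting", "inform", "request", "inform", "inform", "closing"]

# TF-IDF features with a linear classifier: a simple, common text-classification baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(train_utterances, train_acts)

# Predict dialogue acts for unseen utterances.
print(clf.predict(["could you read me the account number", "thanks have a nice day"]))
```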
Intent Induction from Conversations for Task-Oriented Dialogue Track at DSTC 11
James Gung | Raphael Shu | Emily Moeng | Wesley Rose | Salvatore Romeo | Arshit Gupta | Yassine Benajiba | Saab Mansour | Yi Zhang
Proceedings of The Eleventh Dialog System Technology Challenge
With increasing demand for and adoption of virtual assistants, recent work has investigated ways to accelerate bot schema design through the automatic induction of intents or the induction of slots and dialogue states. However, a lack of dedicated benchmarks and standardized evaluation has made progress difficult to track and comparisons between systems difficult to make. This challenge track, held as part of the Eleventh Dialog System Technology Challenge, introduces a benchmark that aims to evaluate methods for the automatic induction of customer intents in a realistic setting of customer service interactions between human agents and customers. We propose two subtasks for progressively tackling the automatic induction of intents and corresponding evaluation methodologies. We then present three datasets suitable for evaluating the tasks and propose simple baselines. Finally, we summarize the submissions and results of the challenge track, for which we received submissions from 34 teams.
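To illustrate the intent induction task, here is a minimal unsupervised baseline sketch in the spirit of the simple baselines the abstract mentions, though not a reproduction of them: cluster customer utterances and score the induced clusters against reference intent labels with normalized mutual information, one standard clustering metric. All utterances and intent labels below are hypothetical placeholders.

```python
# Hypothetical intent induction baseline: cluster utterances, score against reference intents.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import normalized_mutual_info_score

# Placeholder customer utterances with reference intent labels used only for evaluation.
utterances = [
    "i need to reset my password",
    "how do i change my password",
    "i want to cancel my subscription",
    "please cancel my plan",
    "my package never arrived",
    "where is my delivery",
]
reference_intents = ["reset_password"] * 2 + ["cancel_subscription"] * 2 + ["track_order"] * 2

# Embed utterances (TF-IDF here; participating systems typically use stronger encoders).
features = TfidfVectorizer().fit_transform(utterances)

# Induce intents as k-means clusters; k is assumed known for this toy example.
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Evaluate induced clusters against the reference intents.
print(f"NMI: {normalized_mutual_info_score(reference_intents, cluster_ids):.2f}")
```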
Co-authors
- James Gung 2
- Emily Moeng 2
- Arshit Gupta 2
- Yi Zhang 2
- Saab Mansour 2