2021
pdf | bib | abs
Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems
Anish Acharya | Suranjit Adhikari | Sanchit Agarwal | Vincent Auvray | Nehal Belgamwar | Arijit Biswas | Shubhra Chandra | Tagyoung Chung | Maryam Fazel-Zarandi | Raefer Gabriel | Shuyang Gao | Rahul Goel | Dilek Hakkani-Tur | Jan Jezabek | Abhay Jha | Jiun-Yu Kao | Prakash Krishnan | Peter Ku | Anuj Goyal | Chien-Wei Lin | Qing Liu | Arindam Mandal | Angeliki Metallinou | Vishal Naik | Yi Pan | Shachi Paul | Vittorio Perera | Abhishek Sethi | Minmin Shen | Nikko Strom | Eddie Wang
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation. Training each component requires annotations which are hard to obtain for every new domain, limiting scalability of such systems. Similarly, rule-based dialogue systems require extensive writing and maintenance of rules and do not scale either. End-to-End dialogue systems, on the other hand, do not require module-specific annotations but need a large amount of data for training. To overcome these problems, in this demo, we present Alexa Conversations, a new approach for building goal-oriented dialogue systems that is scalable, extensible, and data-efficient. The components of this system are trained in a data-driven manner, but instead of collecting annotated conversations for training, we generate them using a novel dialogue simulator based on a few seed dialogues and specifications of APIs and entities provided by the developer. Our approach provides out-of-the-box support for natural conversational phenomena such as entity sharing across turns or users changing their minds during the conversation, without requiring developers to provide any such dialogue flows. We exemplify our approach using a simple pizza ordering task and showcase its value in reducing the developer burden for creating a robust experience. Finally, we evaluate our system using a typical movie ticket booking task integrated with live APIs and show that the dialogue simulator is an essential component of the system that leads to over 50% improvement in turn-level action signature prediction accuracy.
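As a rough illustration of the simulator idea described in the abstract (all class and function names below are hypothetical and are not the Alexa Conversations SDK): starting from a developer-provided API specification, an entity catalog, and a single seed dialogue, annotated training dialogues can be generated by substituting entity values while keeping each turn aligned with its action signature.

```python
# Hypothetical sketch only: expand one seed dialogue into annotated variants
# by substituting entity values drawn from a developer-provided catalog.
import random
from dataclasses import dataclass


@dataclass
class ApiSpec:
    name: str      # e.g. "OrderPizza"
    slots: tuple   # e.g. ("size", "topping")


@dataclass
class Turn:
    user: str      # user utterance template with {slot} placeholders
    action: str    # action signature, e.g. "OrderPizza(size, topping)"


def simulate(api: ApiSpec, catalog: dict, seed: list, n: int = 10, rng=None):
    """Generate n dialogue variants of the seed by entity substitution."""
    rng = rng or random.Random(0)
    dialogues = []
    for _ in range(n):
        values = {slot: rng.choice(catalog[slot]) for slot in api.slots}
        dialogues.append(
            [Turn(user=t.user.format(**values), action=t.action) for t in seed]
        )
    return dialogues


seed_dialogue = [
    Turn(user="I want a {size} pizza with {topping}",
         action="OrderPizza(size, topping)"),
]
catalog = {"size": ["small", "medium", "large"],
           "topping": ["mushrooms", "olives", "pepperoni"]}

for d in simulate(ApiSpec("OrderPizza", ("size", "topping")), catalog, seed_dialogue, n=3):
    print(d[0].user, "->", d[0].action)
```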
2019
pdf | bib | abs
Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators
Sanghyun Yi | Rahul Goel | Chandra Khatri | Alessandra Cervone | Tagyoung Chung | Behnam Hedayatnia | Anu Venkatesh | Raefer Gabriel | Dilek Hakkani-Tur
Proceedings of the 12th International Conference on Natural Language Generation
Encoder-decoder based neural architectures serve as the basis of state-of-the-art approaches in end-to-end open domain dialog systems. Since most such systems are trained with a maximum likelihood estimation (MLE) objective, they suffer from issues such as lack of generalizability and the generic response problem, i.e., a system response that can be an answer to a large number of user utterances, e.g., “Maybe, I don’t know.” Having explicit feedback on the relevance and interestingness of a system response at each turn can be a useful signal for mitigating such issues and improving system quality by selecting responses from different approaches. Towards this goal, we present a system that evaluates chatbot responses at each dialog turn for coherence and engagement. Our system provides explicit turn-level dialog quality feedback, which we show to be highly correlated with human evaluation. To show that incorporating this feedback in the neural response generation models improves dialog quality, we present two different and complementary mechanisms to incorporate explicit feedback into a neural response generation model: reranking and direct modification of the loss function during training. Our studies show that a response generation model that incorporates these combined feedback mechanisms produces more engaging and coherent responses in an open-domain spoken dialog setting, significantly improving response quality under both automatic and human evaluation.
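As a sketch of the reranking mechanism mentioned in the abstract (the scoring functions below are toy stand-ins, not the paper's trained evaluators): n-best generator outputs can be rescored by combining the generator's log-probability with turn-level coherence and engagement scores, so that generic responses lose out to more specific ones.

```python
# Illustrative sketch only: rerank n-best generator outputs with an
# external evaluator's coherence and engagement scores.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    text: str
    log_prob: float   # score assigned by the response generation model


def rerank(candidates: List[Candidate],
           coherence: Callable[[str], float],
           engagement: Callable[[str], float],
           alpha: float = 0.5) -> Candidate:
    """Pick the candidate maximizing a weighted sum of generator and evaluator scores."""
    def score(c: Candidate) -> float:
        return alpha * c.log_prob + (1 - alpha) * (coherence(c.text) + engagement(c.text))
    return max(candidates, key=score)


# Toy usage: the generic response gets low evaluator scores and loses the reranking.
candidates = [Candidate("Maybe, I don't know.", log_prob=-1.0),
              Candidate("I loved that movie too; the ending surprised me.", log_prob=-2.5)]
best = rerank(candidates,
              coherence=lambda t: 0.1 if "know" in t else 0.9,
              engagement=lambda t: 0.1 if "know" in t else 0.8)
print(best.text)
```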
pdf | bib | abs
Natural Language Generation at Scale: A Case Study for Open Domain Question Answering
Alessandra Cervone | Chandra Khatri | Rahul Goel | Behnam Hedayatnia | Anu Venkatesh | Dilek Hakkani-Tur | Raefer Gabriel
Proceedings of the 12th International Conference on Natural Language Generation
Current approaches to Natural Language Generation (NLG) for dialog mainly focus on domain-specific, task-oriented applications (e.g. restaurant booking) using limited ontologies (up to 20 slot types), usually without considering the previous conversation context. Furthermore, these approaches require large amounts of data for each domain, and do not benefit from examples that may be available for other domains. This work explores the feasibility of applying statistical NLG to scenarios requiring larger ontologies, such as multi-domain dialog applications or open-domain question answering (QA) based on knowledge graphs. We model NLG through an Encoder-Decoder framework using a large dataset of interactions between real-world users and a conversational agent for open-domain QA. First, we investigate the impact of increasing the number of slot types on the generation quality and experiment with different partitions of the QA data with progressively larger ontologies (up to 369 slot types). Second, we perform multi-task learning experiments between open-domain QA and task-oriented dialog, and benchmark our model on a popular NLG dataset. Moreover, we experiment with using the conversational context as an additional input to improve response generation quality. Our experiments show the feasibility of learning statistical NLG models for open-domain QA with larger ontologies.
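As a minimal sketch of how such slot-based inputs might be fed to an encoder-decoder NLG model (the linearization format and delimiters below are assumptions, not taken from the paper): the slot-value meaning representation, optionally prefixed with the previous conversation context, is flattened into a single token sequence for the encoder.

```python
# Assumed linearization scheme: flatten slot-value pairs (plus optional context)
# into the token sequence consumed by an encoder-decoder NLG model.
def linearize(slots: dict, context: str = "") -> str:
    """Turn {"slot": "value"} pairs into 'slot = value' tokens for the encoder."""
    mr = " ; ".join(f"{name} = {value}" for name, value in sorted(slots.items()))
    return f"{context} <sep> {mr}" if context else mr


# Open-domain QA style example with knowledge-graph slot types.
print(linearize({"film.name": "Inception", "film.director": "Christopher Nolan"},
                context="who directed inception"))
# -> who directed inception <sep> film.director = Christopher Nolan ; film.name = Inception
```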