Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks

Janarthanan Rajendran, Jonathan K. Kummerfeld, Satinder Baveja


Abstract
For each goal-oriented dialog task of interest, large amounts of data need to be collected for end-to-end learning of a neural dialog system. Collecting that data is a costly and time-consuming process. Instead, we show that we can use only a small amount of data, supplemented with data from a related dialog task. Naively learning from related data fails to improve performance as the related data can be inconsistent with the target task. We describe a meta-learning based method that selectively learns from the related dialog task data. Our approach leads to significant accuracy improvements in an example dialog task.
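The full algorithm is described in the paper (PDF linked below). As a rough, hypothetical sketch of the general idea of selectively learning from related-task data, the PyTorch snippet below weights each related-task example by how well its gradient agrees with the gradient of a small target-task batch, so related data that is inconsistent with the target task contributes little to the update. The toy model, batches, and the gradient-alignment weighting are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch (assumed, not the paper's algorithm): selectively weight
# related-task examples by the alignment of their gradients with the
# gradient of a small target-task batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a dialog response-selection model.
model = nn.Linear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def flat_grad(loss):
    """Flatten gradients of `loss` w.r.t. model parameters into one vector."""
    grads = torch.autograd.grad(loss, model.parameters(), retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

# Hypothetical mini-batches: (utterance features, response labels).
target_x, target_y = torch.randn(8, 16), torch.randint(0, 4, (8,))
related_x, related_y = torch.randn(32, 16), torch.randint(0, 4, (32,))

# Gradient direction implied by the small target-task batch.
target_loss = F.cross_entropy(model(target_x), target_y)
g_target = flat_grad(target_loss)

# Weight each related example by cosine similarity of its gradient with the
# target gradient; examples that conflict with the target task get weight 0.
weights = []
for x, y in zip(related_x, related_y):
    loss_i = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    g_i = flat_grad(loss_i)
    weights.append(torch.clamp(F.cosine_similarity(g_i, g_target, dim=0), min=0.0))
weights = torch.stack(weights)

# One update on the target loss plus the selectively weighted related-task loss.
related_losses = F.cross_entropy(model(related_x), related_y, reduction="none")
total = target_loss + (weights.detach() * related_losses).mean()
opt.zero_grad()
total.backward()
opt.step()
```

In this sketch the weights are recomputed every step, so related examples are used only while they point the model in roughly the same direction as the target-task data.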
Anthology ID: 2021.nlp4convai-1.16
Volume: Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI
Month: November
Year: 2021
Address: Online
Editors: Alexandros Papangelis, Paweł Budzianowski, Bing Liu, Elnaz Nouri, Abhinav Rastogi, Yun-Nung Chen
Venue: NLP4ConvAI
Publisher: Association for Computational Linguistics
Pages: 163–178
URL: https://aclanthology.org/2021.nlp4convai-1.16
DOI: 10.18653/v1/2021.nlp4convai-1.16
Cite (ACL): Janarthanan Rajendran, Jonathan K. Kummerfeld, and Satinder Baveja. 2021. Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 163–178, Online. Association for Computational Linguistics.
Cite (Informal): Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks (Rajendran et al., NLP4ConvAI 2021)
PDF: https://aclanthology.org/2021.nlp4convai-1.16.pdf