Pretrain-Finetune Based Training of Task-Oriented Dialogue Systems in a Real-World Setting

Manisha Srivastava, Yichao Lu, Riley Peschon, Chenyang Li


Abstract
One main challenge in building task-oriented dialogue systems is the limited amount of supervised training data available. In this work, we present a method for training retrieval-based dialogue systems using a small amount of high-quality annotated data together with a larger unlabeled dataset. We show that pretraining on the unlabeled data improves model performance, yielding a 31% boost in Recall@1 compared with no pretraining. The proposed finetuning technique, based on the small amount of high-quality annotated data, further improves Recall@1 by 26% offline and 33% online over the pretrained model. The model is deployed in an agent-support application and evaluated on live customer service contacts, providing additional insight into real-world implications compared with most other publications in the domain, which often use asynchronous transcripts (e.g., Reddit data). The high performance of 74% Recall@1 in the customer service setting demonstrates the effectiveness of this pretrain-finetune approach in dealing with the limited supervised data challenge.
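For reference, Recall@1 — the metric reported throughout the abstract — measures how often a retrieval model ranks the gold response first among its candidates. The sketch below is illustrative only (`recall_at_k` is a hypothetical helper, not the authors' code):

```python
# Minimal sketch of Recall@k for a retrieval-based dialogue system.
# For each context, the model produces a ranked list of candidate
# responses; Recall@k is the fraction of contexts whose gold response
# appears in the top k. Recall@1 is the k=1 case.

def recall_at_k(ranked_lists, gold_responses, k=1):
    """ranked_lists[i] is the model's ranked candidates for context i."""
    hits = sum(
        1 for ranked, gold in zip(ranked_lists, gold_responses)
        if gold in ranked[:k]
    )
    return hits / len(gold_responses)

# Toy example: 3 contexts, gold response ranked first in 2 of them.
ranked = [["yes", "no"], ["refund", "hello"], ["hi", "bye"]]
gold = ["yes", "refund", "bye"]
print(recall_at_k(ranked, gold, k=1))  # 2 of 3 correct, ≈ 0.667
```

At k=2 the same toy example scores 1.0, since every gold response appears within the top two candidates.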
Anthology ID: 2021.naacl-industry.5
Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers
Month: June
Year: 2021
Address: Online
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 34–40
URL: https://aclanthology.org/2021.naacl-industry.5
DOI: 10.18653/v1/2021.naacl-industry.5
Cite (ACL): Manisha Srivastava, Yichao Lu, Riley Peschon, and Chenyang Li. 2021. Pretrain-Finetune Based Training of Task-Oriented Dialogue Systems in a Real-World Setting. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, pages 34–40, Online. Association for Computational Linguistics.
Cite (Informal): Pretrain-Finetune Based Training of Task-Oriented Dialogue Systems in a Real-World Setting (Srivastava et al., NAACL 2021)
PDF: https://aclanthology.org/2021.naacl-industry.5.pdf
Video: https://aclanthology.org/2021.naacl-industry.5.mp4