Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems

Fei Mi, Wanhao Zhou, Lingjing Kong, Fengyu Cai, Minlie Huang, Boi Faltings


Abstract
As labeling data for the different modules of task-oriented dialog (ToD) systems is expensive, a major challenge is to train these modules with the least amount of labeled data. Recently, large-scale pre-trained language models have shown promising results for few-shot learning in ToD. In this paper, we devise a self-training approach that exploits abundant unlabeled dialog data to further improve state-of-the-art pre-trained models in few-shot learning scenarios for ToD systems. Specifically, our self-training approach iteratively labels the most confident unlabeled data to train a stronger Student model. Moreover, a new text augmentation technique (GradAug) is proposed to better train the Student by replacing non-crucial tokens using a masked language model. We conduct extensive experiments and present analyses on four downstream tasks in ToD: intent classification, dialog state tracking, dialog act prediction, and response selection. Empirical results demonstrate that the proposed self-training approach consistently improves state-of-the-art pre-trained models (BERT, ToD-BERT) when only a small amount of labeled data is available.
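To make the self-training loop described above concrete, the sketch below shows iterative pseudo-labeling with a confidence threshold: a Teacher labels unlabeled data, only the most confident predictions are kept, and a stronger Student is trained on the enlarged set. This is a minimal illustration based solely on the abstract, not the paper's implementation (see mifei/st-tod); the scikit-learn classifier stands in for BERT/ToD-BERT, and the threshold, iteration count, and all names are illustrative assumptions. GradAug is omitted here.

```python
# Minimal self-training sketch (assumptions: sklearn stand-in models,
# confidence_threshold=0.9, 3 iterations; not the paper's actual code).
import numpy as np
from sklearn.linear_model import LogisticRegression


def self_train(X_labeled, y_labeled, X_unlabeled,
               n_iterations=3, confidence_threshold=0.9):
    """Iteratively pseudo-label the most confident unlabeled examples
    and retrain a Student model on the enlarged training set."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    unlabeled = X_unlabeled.copy()

    # Initial Teacher trained on the few labeled examples.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(n_iterations):
        if len(unlabeled) == 0:
            break
        probs = model.predict_proba(unlabeled)
        confidence = probs.max(axis=1)
        pseudo_labels = model.classes_[probs.argmax(axis=1)]

        # Keep only the most confident pseudo-labeled examples.
        keep = confidence >= confidence_threshold
        if not keep.any():
            break
        X_train = np.vstack([X_train, unlabeled[keep]])
        y_train = np.concatenate([y_train, pseudo_labels[keep]])
        unlabeled = unlabeled[~keep]

        # Train a stronger Student on labeled + pseudo-labeled data;
        # it becomes the Teacher for the next iteration.
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy few-shot setup: 20 labeled and 500 unlabeled 2-D points.
    X_lab = rng.normal(size=(20, 2)) + np.array([[2, 2]] * 10 + [[-2, -2]] * 10)
    y_lab = np.array([1] * 10 + [0] * 10)
    X_unl = rng.normal(size=(500, 2)) + np.where(
        rng.random((500, 1)) > 0.5, 2.0, -2.0)
    clf = self_train(X_lab, y_lab, X_unl)
    print("Student trained on", clf.n_features_in_, "features")
```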
Anthology ID:
2021.emnlp-main.142
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1887–1898
URL:
https://aclanthology.org/2021.emnlp-main.142
DOI:
10.18653/v1/2021.emnlp-main.142
Cite (ACL):
Fei Mi, Wanhao Zhou, Lingjing Kong, Fengyu Cai, Minlie Huang, and Boi Faltings. 2021. Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1887–1898, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems (Mi et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.142.pdf
Video:
https://aclanthology.org/2021.emnlp-main.142.mp4
Code:
mifei/st-tod