Prompt Augmented Generative Replay via Supervised Contrastive Learning for Lifelong Intent Detection

Vaibhav Varshney, Mayur Patidar, Rajat Kumar, Lovekesh Vig, Gautam Shroff


Abstract
Identifying all possible user intents for a dialog system at design time is challenging even for skilled domain experts. For practical applications, novel intents may have to be inferred incrementally on the fly. This typically entails repeated retraining of the intent detector on both the existing and novel intents, which can be expensive and requires storage of all past data corresponding to prior intents. In this paper, the objective is to continually train an intent detector on new intents while maintaining performance on prior intents, without mandating access to prior intent data. Several data replay-based approaches have been introduced to avoid catastrophic forgetting during continual learning, including exemplar and generative replay. Current generative replay approaches struggle to generate representative samples because the generation is conditioned solely on the class/task label. Motivated by recent work on prompt-based generation via pre-trained language models (PLMs), we employ generative replay using PLMs for incremental intent detection. Unlike exemplar replay, we only store the relevant contexts per intent in memory and use these stored contexts (with the class label) as prompts for generating intent-specific utterances. We use a common model for both generation and classification to promote optimal sharing of knowledge across both tasks. To further improve generation, we employ supervised contrastive fine-tuning of the PLM. Our proposed approach achieves state-of-the-art (SOTA) results for lifelong intent detection on four public datasets and even outperforms exemplar replay-based approaches. The technique also achieves SOTA on a lifelong relation extraction task, suggesting that the approach is extendable to other continual learning tasks beyond intent detection.
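The abstract mentions supervised contrastive fine-tuning of the PLM. As background, the generic supervised contrastive (SupCon) loss pulls embeddings of same-intent utterances together and pushes different-intent embeddings apart. The sketch below is an illustrative NumPy implementation of that generic loss, not the paper's code; the function name and temperature default are assumptions:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss over a batch.

    features: (N, D) array of L2-normalised utterance embeddings.
    labels:   (N,) array of integer intent labels.
    For each anchor, positives are all other samples with the same label;
    the denominator sums similarities to every other sample in the batch.
    """
    features = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    n = features.shape[0]

    sim = features @ features.T / temperature   # pairwise scaled similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-comparisons

    # log of the denominator: sum over all other samples, per anchor
    log_denom = np.log(np.exp(sim).sum(axis=1))

    loss, count = 0.0, 0
    for i in range(n):
        pos = np.where((labels == labels[i]) & (np.arange(n) != i))[0]
        if len(pos) == 0:          # anchors without positives are skipped
            continue
        loss += -np.mean(sim[i, pos] - log_denom[i])
        count += 1
    return loss / max(count, 1)
```

With well-separated per-intent embeddings the loss approaches zero, while mixed clusters incur a large penalty, which is the signal used to fine-tune the encoder.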
Anthology ID:
2022.findings-naacl.84
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
1113–1127
Language:
URL:
https://aclanthology.org/2022.findings-naacl.84
DOI:
10.18653/v1/2022.findings-naacl.84
Bibkey:
Cite (ACL):
Vaibhav Varshney, Mayur Patidar, Rajat Kumar, Lovekesh Vig, and Gautam Shroff. 2022. Prompt Augmented Generative Replay via Supervised Contrastive Learning for Lifelong Intent Detection. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1113–1127, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Prompt Augmented Generative Replay via Supervised Contrastive Learning for Lifelong Intent Detection (Varshney et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.84.pdf
Video:
https://aclanthology.org/2022.findings-naacl.84.mp4
Data
BANKING77, FewRel, HWU64, SGD