The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification

Mohsen Mesgar, Thy Thy Tran, Goran Glavaš, Iryna Gurevych


Abstract
In task-oriented dialog (ToD), new intents emerge on a regular basis, with at best a handful of available utterances. This renders effective Few-Shot Intent Classification (FSIC) a central challenge for modular ToD systems. Recent FSIC methods appear similar: they use pretrained language models (PLMs) to encode utterances and predominantly resort to nearest-neighbor-based inference. However, they also differ in major components: they start from different PLMs, use different encoding architectures and utterance similarity functions, and adopt different training regimes. The coupling of these vital components, together with the lack of informative ablations, prevents the identification of the factors that drive the reported FSIC performance. We propose a unified framework to evaluate these components along three key dimensions: (1) encoding architecture: cross-encoder vs. bi-encoder; (2) similarity function: parameterized (i.e., trainable) vs. non-parameterized; (3) training regime: episodic meta-learning vs. conventional (i.e., non-episodic) training. Our experimental results on seven FSIC benchmarks reveal three important new findings. First, the previously unexplored combination of a cross-encoder architecture and episodic meta-learning consistently yields the best FSIC performance. Second, episodic training substantially outperforms its non-episodic counterpart. Finally, we show that splitting episodes into support and query sets has a limited and inconsistent effect on performance. Our findings underline the importance of ablations and fair comparisons in FSIC. We publicly release our code and data.
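The episodic training regime contrasted with conventional training in the abstract can be illustrated with a small sampling sketch. This is a hypothetical helper written for illustration only, not code from the paper's released repository; function and variable names are our own, and real episodes would draw from an intent-classification dataset rather than toy strings.

```python
import random
from collections import defaultdict

def sample_episode(data, n_way=2, k_shot=1, q_queries=1, seed=0):
    """Sample one N-way K-shot episode from labeled utterances.

    data: list of (utterance, intent_label) pairs.
    Returns (support, query): disjoint lists of (utterance, label)
    pairs covering the same n_way sampled intents.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for utt, label in data:
        by_label[label].append(utt)
    # Keep only intents with enough utterances for support + query.
    eligible = [l for l, utts in by_label.items()
                if len(utts) >= k_shot + q_queries]
    labels = rng.sample(eligible, n_way)
    support, query = [], []
    for label in labels:
        utts = rng.sample(by_label[label], k_shot + q_queries)
        support += [(u, label) for u in utts[:k_shot]]
        query += [(u, label) for u in utts[k_shot:]]
    return support, query
```

In episodic meta-learning, each training step runs on one such episode (classify the query utterances given only the support set); the paper's third finding concerns whether this support/query split inside an episode actually matters.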
Anthology ID:
2023.eacl-main.135
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1846–1857
URL:
https://aclanthology.org/2023.eacl-main.135
DOI:
10.18653/v1/2023.eacl-main.135
Cite (ACL):
Mohsen Mesgar, Thy Thy Tran, Goran Glavaš, and Iryna Gurevych. 2023. The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1846–1857, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
The Devil is in the Details: On Models and Training Regimes for Few-Shot Intent Classification (Mesgar et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.135.pdf
Video:
https://aclanthology.org/2023.eacl-main.135.mp4