Learning to Adapt Large Language Models to One-Shot In-Context Intent Classification on Unseen Domains

Joongbo Shin, Youbin Ahn, Seungpil Won, Stanley Jungkyu Choi


Abstract
In this paper, we explore one-shot in-context intent classification using large language models (LLMs), with the goal of minimizing the effort required to adapt models to unseen domains. To enhance the one-shot in-context learning capabilities of LLMs, we employ in-context tuning, leveraging its cross-domain transferability to unseen domains. To this end, we introduce the IC-collection, a compilation of open-source intent classification datasets from diverse domains, meticulously divided into held-in and held-out datasets. Our experiments demonstrate the effectiveness of the proposed method, showing that our model, with only 7B parameters, not only outperforms GPT-4 on intent classification but also achieves state-of-the-art performance in unseen domains with only one-shot demonstrations. Both our benchmark and model will be made publicly available to advance research in chatbot systems.
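For illustration only, here is a minimal sketch of what a one-shot in-context intent-classification prompt might look like. The intent labels, utterances, and prompt wording below are all hypothetical assumptions; the paper's actual prompt format is not specified in this page.

```python
# Hypothetical sketch of a one-shot in-context intent-classification prompt.
# The label names, demonstration utterance, and template are illustrative
# assumptions, not the prompt format used in the paper.

def build_one_shot_prompt(labels, demo_utterance, demo_label, query):
    """Assemble a prompt containing the label set, one demonstration,
    and the query utterance to be classified."""
    label_list = ", ".join(labels)
    return (
        f"Classify the user utterance into one of these intents: {label_list}.\n"
        f"Utterance: {demo_utterance}\n"
        f"Intent: {demo_label}\n"
        f"Utterance: {query}\n"
        f"Intent:"
    )

prompt = build_one_shot_prompt(
    labels=["book_flight", "cancel_booking", "check_status"],  # hypothetical intents
    demo_utterance="I want to fly to Miami next Friday.",
    demo_label="book_flight",
    query="Please cancel my reservation for tomorrow.",
)
print(prompt)  # feed this to an LLM; the completion is the predicted intent label
```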
Anthology ID:
2024.customnlp4u-1.15
Volume:
Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U)
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Sachin Kumar, Vidhisha Balachandran, Chan Young Park, Weijia Shi, Shirley Anugrah Hayati, Yulia Tsvetkov, Noah Smith, Hannaneh Hajishirzi, Dongyeop Kang, David Jurgens
Venue:
CustomNLP4U
Publisher:
Association for Computational Linguistics
Pages:
182–197
URL:
https://aclanthology.org/2024.customnlp4u-1.15
Cite (ACL):
Joongbo Shin, Youbin Ahn, Seungpil Won, and Stanley Jungkyu Choi. 2024. Learning to Adapt Large Language Models to One-Shot In-Context Intent Classification on Unseen Domains. In Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U), pages 182–197, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Learning to Adapt Large Language Models to One-Shot In-Context Intent Classification on Unseen Domains (Shin et al., CustomNLP4U 2024)
PDF:
https://aclanthology.org/2024.customnlp4u-1.15.pdf