2024
Learning to Adapt Large Language Models to One-Shot In-Context Intent Classification on Unseen Domains
Joongbo Shin | Youbin Ahn | Seungpil Won | Stanley Jungkyu Choi
Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U)
In this paper, we explore one-shot in-context intent classification using large language models (LLMs) with the goal of minimizing the effort required to adapt models to unseen domains. To enhance the one-shot in-context learning capabilities of LLMs, we employ in-context tuning, leveraging its cross-domain transferability to unseen domains. To this end, we introduce the IC-collection, a compilation of open-source intent classification datasets from diverse domains, which are meticulously divided into held-in and held-out datasets. Our experiments demonstrate the effectiveness of the proposed method, showing that our model, with only 7B parameters, not only outperforms GPT-4 on intent classification but also achieves state-of-the-art results in unseen domains with only one-shot demonstrations. Both our benchmark and model will be made publicly available to advance research on chatbot systems.
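As a rough sketch of the one-shot in-context setup described above (not the authors' actual prompt or the IC-collection data; the template, intent labels, and utterances below are hypothetical), a single classification request might be assembled like this:

```python
# Minimal sketch of one-shot in-context intent classification.
# The prompt template, intent labels, and example utterances are invented;
# the paper's actual prompt format and training data differ.

def build_one_shot_prompt(intents, demo_utterance, demo_intent, query):
    """Assemble a prompt with the label set and one demonstration per request."""
    lines = [
        "Classify the user utterance into one of the intents below.",
        "Intents: " + ", ".join(intents),
        "",
        f"Utterance: {demo_utterance}",
        f"Intent: {demo_intent}",
        "",
        f"Utterance: {query}",
        "Intent:",
    ]
    return "\n".join(lines)

prompt = build_one_shot_prompt(
    intents=["book_flight", "cancel_flight", "baggage_policy"],
    demo_utterance="I need to fly to Berlin next Monday.",
    demo_intent="book_flight",
    query="How many bags can I check in?",
)
print(prompt)  # feed to an LLM; the generated label is the prediction
```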
Exploring the Use of Natural Language Descriptions of Intents for Large Language Models in Zero-shot Intent Classification
Taesuk Hong | Youbin Ahn | Dongkyu Lee | Joongbo Shin | Seungpil Won | Janghoon Han | Stanley Jungkyu Choi | Jungyun Seo
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue
In task-oriented dialogue systems, intent classification is crucial for accurately understanding user queries and providing appropriate services. This study explores the use of intent descriptions with large language models for unseen domain intent classification. By examining the effects of description quality, quantity, and input length management, we identify practical guidelines for optimizing performance. Our experiments using FLAN-T5 3B demonstrate that 1) high-quality descriptions for both training and testing significantly improve accuracy, 2) diversity in training descriptions doesn’t greatly affect performance, and 3) off-the-shelf rankers selecting around ten intent options reduce input length without compromising performance. We emphasize that high-quality testing descriptions have a greater impact on accuracy than training descriptions. These findings provide practical guidelines for using intent descriptions with large language models to achieve effective and efficient intent classification in low-resource settings.
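A minimal sketch of the input-length management step the abstract mentions, assuming a generic off-the-shelf ranker; here TF-IDF similarity stands in for whatever ranker the authors actually used, and the intent names and descriptions are invented:

```python
# Prune the intent-option list with a ranker before prompting, keeping
# roughly ten candidates as the abstract suggests. TF-IDF similarity is a
# stand-in for the paper's ranker; intents and descriptions are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_k_intents(query, descriptions, k=10):
    """Rank natural-language intent descriptions against the user query."""
    texts = [query] + list(descriptions.values())
    sims = cosine_similarity(TfidfVectorizer().fit_transform(texts))[0, 1:]
    ranked = sorted(zip(descriptions, sims), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]  # only these go into the prompt

descriptions = {
    "book_flight": "the user wants to reserve a plane ticket",
    "baggage_policy": "the user asks about luggage allowances and fees",
    "cancel_flight": "the user wants to cancel an existing reservation",
}
print(top_k_intents("I want to cancel my reservation", descriptions, k=2))
# cancel_flight should rank first due to term overlap with its description
```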
2023
Persona Expansion with Commonsense Knowledge for Diverse and Consistent Response Generation
Donghyun Kim | Youbin Ahn | Wongyu Kim | Chanhee Lee | Kyungchan Lee | Kyong-Ho Lee | Jeonguk Kim | Donghoon Shin | Yeonsoo Lee
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Generating diverse and consistent responses is the ultimate goal of a persona-based dialogue. Although many studies have been conducted, the generated responses tend to be generic and bland due to the personas' limited descriptiveness. Therefore, it is necessary to expand the given personas for more attractive responses. However, indiscriminate expansion of personas threatens the consistency of responses and therefore reduces the interlocutor's interest in the conversation. To alleviate this issue, we propose a consistent persona expansion framework that improves not only the diversity but also the consistency of persona-based responses. To do so, we define consistency criteria to avoid possible contradictions among personas: 1) Intra-Consistency and 2) Inter-Consistency. Then, we construct a silver profile dataset to instill the ability to conform to the consistency criteria into the expansion model. Finally, we propose a persona expansion model with an encoder-decoder structure, which considers the relatedness and consistency among personas. Our experiments on the Persona-Chat dataset demonstrate the superiority of the proposed framework.
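One plausible way to operationalize such a contradiction check, sketched with an off-the-shelf NLI model; the paper's Intra-/Inter-Consistency criteria are not necessarily implemented this way, and the model choice and filtering logic below are assumptions:

```python
# Hedged sketch: filter candidate persona expansions by checking for
# contradictions against existing personas with a pretrained NLI model.
# This illustrates the idea of a consistency criterion, not the paper's method.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def is_consistent(existing_personas, candidate):
    """Reject a candidate persona that contradicts any existing persona."""
    for persona in existing_personas:
        result = nli([{"text": persona, "text_pair": candidate}])[0]
        if result["label"] == "CONTRADICTION":
            return False
    return True

personas = ["I am a vegetarian.", "I live in Seoul."]
print(is_consistent(personas, "I love grilled steak."))  # likely False
```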
CLICK: Contrastive Learning for Injecting Contextual Knowledge to Conversational Recommender System
Hyeongjun Yang | Heesoo Won | Youbin Ahn | Kyong-Ho Lee
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Conversational recommender systems (CRSs) capture a user preference through a conversation. However, existing CRSs fail to capture comprehensive user preferences because they mainly treat the items mentioned in a conversation as the user preference, and thus struggle to identify a preference from a dialogue context that expresses no preferred items. Inspired by online recommendation communities, where participants identify the context of a recommendation request and then comment with appropriate items, we exploit Reddit data. Specifically, we propose a Contrastive Learning approach for Injecting Contextual Knowledge (CLICK) from Reddit data into the CRS task, which facilitates capturing a context-level user preference from a dialogue context, regardless of whether preferred item-entities are present. Moreover, we devise a relevance-enhanced contrastive learning loss that accounts for the fine-grained reflection of multiple recommendable items. We further develop a response generation module that produces a persuasive rationale for a recommendation. Extensive experiments on the benchmark CRS dataset show the effectiveness of CLICK, achieving significant improvements over state-of-the-art methods.
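For readers unfamiliar with the underlying mechanism, a generic InfoNCE-style contrastive loss over (dialogue context, item) embedding pairs looks like the sketch below; CLICK's relevance-enhanced variant additionally weights multiple recommendable items, which is omitted here:

```python
# Generic InfoNCE contrastive loss between dialogue-context and item
# embeddings: row i of each batch is a positive pair, all other rows are
# in-batch negatives. A sketch of the base mechanism, not CLICK's exact loss.
import torch
import torch.nn.functional as F

def info_nce(context_emb, item_emb, temperature=0.07):
    """context_emb, item_emb: (batch, dim) tensors; aligned rows are positives."""
    c = F.normalize(context_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    logits = c @ v.t() / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(c.size(0))      # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```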
Concept-based Persona Expansion for Improving Diversity of Persona-Grounded Dialogue
Donghyun Kim | Youbin Ahn | Chanhee Lee | Wongyu Kim | Kyong-Ho Lee | Donghoon Shin | Yeonsoo Lee
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
A persona-grounded dialogue model aims to improve the quality of responses to promote user engagement. However, because the given personas are mostly short and limited to a few informative words, it is challenging to use them to generate diverse responses. To tackle this problem, we propose a novel persona expansion framework, Concept-based Persona eXpansion (CPX). CPX takes the original persona as input and generates expanded personas with conceptually rich content. CPX consists of two task modules: 1) a Concept Extractor and 2) a Sentence Generator. To train these modules, we exploit the duality of the two tasks using a commonsense dataset consisting of concept sets and the corresponding sentences that contain the given concepts. Extensive experiments on persona expansion and response generation show that our framework substantially improves the diversity and richness of responses.
2022
Emp-RFT: Empathetic Response Generation via Recognizing Feature Transitions between Utterances
Wongyu Kim | Youbin Ahn | Donghyun Kim | Kyong-Ho Lee
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Each utterance in a multi-turn empathetic dialogue has features such as emotion, keywords, and utterance-level meaning, and transitions between these features occur naturally across turns. However, existing approaches fail to perceive the transitions because they extract features for the context only at a coarse-grained level. To solve this issue, we propose a novel approach that recognizes feature transitions between utterances, which helps the model understand the dialogue flow and better grasp the features of the utterance that needs attention. We also introduce a response generation strategy that helps the model focus on the emotion and keywords related to the appropriate features when generating responses. Experimental results show that our approach outperforms baselines and, in particular, achieves significant improvements on multi-turn dialogues.
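To make the notion of feature transitions concrete, here is a toy illustration using the feature types named in the abstract (emotion and keywords); the data structures and transition logic are invented for exposition, not taken from Emp-RFT:

```python
# Toy illustration of per-utterance features and transitions between them.
# The feature types follow the abstract; the detection logic is hypothetical.
from dataclasses import dataclass

@dataclass
class UtteranceFeatures:
    emotion: str
    keywords: set[str]

def feature_transitions(features):
    """Yield (turn index, what changed) for each pair of adjacent utterances."""
    for i in range(1, len(features)):
        prev, curr = features[i - 1], features[i]
        yield i, {
            "emotion_shift": prev.emotion != curr.emotion,
            "new_keywords": curr.keywords - prev.keywords,
        }

dialogue = [
    UtteranceFeatures("sadness", {"exam", "failed"}),
    UtteranceFeatures("comforting", {"exam", "retake"}),
]
for turn, change in feature_transitions(dialogue):
    print(turn, change)  # turn 1: emotion shifts, "retake" is a new keyword
```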