SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU

Evgeniia Razumovskaia, Goran Glavaš, Anna Korhonen, Ivan Vulić


Abstract
Task-oriented dialogue (TOD) systems help users execute well-defined tasks across a variety of domains (e.g., flight booking or food ordering). Their Natural Language Understanding (NLU) components analyse user utterances, predicting users' intents (Intent Detection, ID) and extracting values for informational slots (Value Extraction, VE). In most domains, labelled NLU data is scarce, making sample-efficient learning, enabled by effective transfer paradigms, paramount. In this work, we introduce SQATIN, a new framework for dialogue NLU based on (i) instruction tuning and (ii) a question-answering-based formulation of the ID and VE tasks. In evaluations on established NLU benchmarks, SQATIN sets the new state of the art in dialogue NLU: it substantially surpasses current models based on standard fine-tuning objectives in both in-domain training and cross-domain transfer, and it also outperforms off-the-shelf large language models on the same tasks, in terms of both performance and inference efficiency. SQATIN yields particularly large gains in cross-domain transfer because its QA-based instruction tuning leverages similarities between the natural language descriptions of classes (i.e., intents and slots) across domains.
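The QA-based formulation described above can be illustrated with a minimal sketch: each intent or slot is paired with a natural language description, and each prediction becomes a question over the user utterance. Note that the prompt wording below is hypothetical and only illustrates the general idea; the paper's actual instruction templates may differ.

```python
# Hypothetical sketch of casting dialogue NLU as question answering.
# The exact prompt templates are assumptions, not the paper's templates.

def intent_question(utterance: str, intent_description: str) -> str:
    """Build a yes/no QA prompt for Intent Detection (ID):
    one question per candidate intent, answered by the model."""
    return (
        f"Context: {utterance}\n"
        f"Question: Is the user asking to {intent_description}? "
        f"Answer yes or no."
    )

def slot_question(utterance: str, slot_description: str) -> str:
    """Build an extractive QA prompt for Value Extraction (VE):
    the answer span in the utterance is the slot value."""
    return (
        f"Context: {utterance}\n"
        f"Question: What is the {slot_description} mentioned by the user?"
    )

# Example: the same templates apply to any domain, since intents and
# slots are identified by their descriptions rather than label indices.
id_prompt = intent_question("I want to fly to Rome", "book a flight")
ve_prompt = slot_question("I want to fly to Rome", "destination city")
```

Because classes are referenced through their descriptions rather than opaque label indices, semantically similar intents and slots in a new domain map onto familiar questions, which is consistent with the cross-domain transfer gains reported in the abstract.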
Anthology ID:
2024.naacl-long.453
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
8188–8204
URL:
https://aclanthology.org/2024.naacl-long.453
Cite (ACL):
Evgeniia Razumovskaia, Goran Glavaš, Anna Korhonen, and Ivan Vulić. 2024. SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8188–8204, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU (Razumovskaia et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.453.pdf
Copyright:
https://aclanthology.org/2024.naacl-long.453.copyright.pdf