Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning

Hoang Nguyen, Chenwei Zhang, Ye Liu, Philip Yu


Abstract
Recent advanced methods in Natural Language Understanding (NLU) for Task-oriented Dialogue (TOD) systems (e.g., intent detection and slot filling) require a large amount of annotated data to achieve competitive performance. In reality, token-level annotations (slot labels) are time-consuming and difficult to acquire. In this work, we study the Slot Induction (SI) task, whose objective is to induce slot boundaries without explicit knowledge of token-level slot annotations. We propose leveraging Unsupervised Pre-trained Language Model (PLM) Probing and a Contrastive Learning mechanism to exploit (1) unsupervised semantic knowledge extracted from the PLM, and (2) additional sentence-level intent label signals available from TOD. Our approach is shown to be effective on the SI task and capable of bridging the gap with token-level supervised models on two NLU benchmark datasets. When generalized to emerging intents, our SI objectives also provide enhanced slot label representations, leading to improved performance on the Slot Filling task.
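For readers unfamiliar with the sentence-level signal mentioned in (2), the following is a minimal illustrative sketch, not the paper's implementation, of a supervised contrastive objective in which utterance embeddings sharing the same intent label are treated as positives. The function name, hyperparameters, and the choice of an InfoNCE-style loss over batch-internal negatives are assumptions for illustration only.

```python
# Illustrative sketch only: a sentence-level supervised contrastive loss over
# utterance embeddings that share an intent label. Hypothetical code; it does
# not reproduce the paper's multi-level objective.
import torch
import torch.nn.functional as F


def intent_contrastive_loss(embeddings: torch.Tensor,
                            intent_labels: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """Supervised InfoNCE-style loss: utterances with the same intent are positives.

    embeddings:    (batch, dim) sentence representations, e.g. [CLS] vectors from a PLM.
    intent_labels: (batch,) integer intent ids.
    """
    z = F.normalize(embeddings, dim=-1)            # compare in cosine-similarity space
    sim = z @ z.t() / temperature                  # (batch, batch) pairwise similarities
    batch = z.size(0)

    self_mask = torch.eye(batch, dtype=torch.bool, device=z.device)
    pos_mask = (intent_labels.unsqueeze(0) == intent_labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other utterances in the batch (exclude the anchor itself)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of positives for each anchor that has at least one positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -pos_log_prob[valid] / pos_counts[valid]
    return loss.mean()
```

In the paper's setting, this kind of sentence-level intent signal is combined with unsupervised PLM probing to induce slot boundaries; the sketch above only isolates the contrastive component under the stated assumptions.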
Anthology ID: 2023.sigdial-1.44
Volume: Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month: September
Year: 2023
Address: Prague, Czechia
Editors: Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, Malihe Alikhani
Venue: SIGDIAL
SIG: SIGDIAL
Publisher: Association for Computational Linguistics
Pages: 470–481
URL: https://aclanthology.org/2023.sigdial-1.44
DOI: 10.18653/v1/2023.sigdial-1.44
Cite (ACL): Hoang Nguyen, Chenwei Zhang, Ye Liu, and Philip Yu. 2023. Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 470–481, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal): Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning (Nguyen et al., SIGDIAL 2023)
PDF: https://aclanthology.org/2023.sigdial-1.44.pdf