Shuyu Lei


2021

Slot Transferability for Cross-domain Slot Filling
Hengtong Lu | Zhuoxin Han | Caixia Yuan | Xiaojie Wang | Shuyu Lei | Huixing Jiang | Wei Wu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Task-Oriented Clustering for Dialogues
Chenxu Lv | Hengtong Lu | Shuyu Lei | Huixing Jiang | Wei Wu | Caixia Yuan | Xiaojie Wang
Findings of the Association for Computational Linguistics: EMNLP 2021

A reliable clustering algorithm for task-oriented dialogues can help developers analyze and define dialogue tasks efficiently. Directly applying clustering algorithms designed for ordinary text to task-oriented dialogues is challenging because of inherent differences between the two, such as coreference, omission, and diversity of expression. In this paper, we propose a Dialogue Task Clustering Network model for task-oriented clustering. The proposed model combines context-aware utterance representations and cross-dialogue utterance cluster representations for task-oriented dialogue clustering. An iterative end-to-end training strategy is used to perform dialogue clustering and representation learning jointly. Experiments on three public datasets show that our model significantly outperforms strong baselines on all metrics.
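To make the two-view design concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a context-aware utterance encoder combined with learned cross-dialogue cluster centroids, trained by alternating cluster assignment and representation refinement. All class names, layer choices, and hyperparameters here (DialogueTaskClusterer, the Bi-GRU encoder, dim=128, n_clusters=10) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch, not the paper's code: context-aware utterance
# encodings plus cluster centroids shared across dialogues, trained by
# alternating hard cluster assignment and representation refinement.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialogueTaskClusterer(nn.Module):
    def __init__(self, vocab_size, dim=128, n_clusters=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Bi-GRU conditions each utterance representation on its dialogue
        # context (a stand-in for the paper's context-aware encoder).
        self.context_encoder = nn.GRU(dim, dim, batch_first=True,
                                      bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)
        # Cluster centroids shared across dialogues (cross-dialogue view).
        self.centroids = nn.Parameter(torch.randn(n_clusters, dim))

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -- one padded utterance per row.
        h, _ = self.context_encoder(self.embed(token_ids))
        utt = self.proj(h.mean(dim=1))                 # (batch, dim)
        utt = F.normalize(utt, dim=-1)
        cen = F.normalize(self.centroids, dim=-1)
        sim = utt @ cen.t()                            # cosine similarity
        return utt, sim

def train_step(model, optimizer, token_ids, temperature=0.1):
    """One iteration of the alternating scheme: (1) hard-assign each
    utterance to its nearest centroid, (2) update representations and
    centroids to sharpen that assignment."""
    utt, sim = model(token_ids)
    assign = sim.argmax(dim=-1).detach()               # step 1: assignment
    loss = F.cross_entropy(sim / temperature, assign)  # step 2: refinement
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DialogueTaskClusterer(vocab_size=1000)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    fake_batch = torch.randint(0, 1000, (8, 12))       # 8 utterances, len 12
    print(train_step(model, opt, fake_batch))
```

The alternating assignment/refinement loop mirrors the abstract's iterative end-to-end strategy in spirit; the paper's actual objective and encoder may differ.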

2020

Syntactic Graph Convolutional Network for Spoken Language Understanding
Keqing He | Shuyu Lei | Yushu Yang | Huixing Jiang | Zhongyuan Wang
Proceedings of the 28th International Conference on Computational Linguistics

Slot filling and intent detection are two major tasks in spoken language understanding. Most existing work builds the two tasks as joint models trained with multi-task learning, without considering prior linguistic knowledge. In this paper, we propose a novel joint model that applies a graph convolutional network over dependency trees to integrate syntactic structure for learning slot filling and intent detection jointly. Experimental results show that our proposed model achieves state-of-the-art performance on two public benchmark datasets, outperforming existing work. Finally, we apply BERT to further improve performance on both slot filling and intent detection.
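As a concrete illustration of the joint architecture, here is a hedged PyTorch sketch: a graph convolution over the dependency-tree adjacency matrix feeds both a per-token slot classifier and an utterance-level intent classifier. The class name SyntacticGCNJointModel, the Bi-LSTM encoder, and all layer sizes are assumptions for illustration, not the paper's exact model.

```python
# Illustrative sketch, not the paper's code: GCN over a dependency-tree
# adjacency matrix, with joint slot-filling and intent-detection heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntacticGCNJointModel(nn.Module):
    def __init__(self, vocab_size, dim=128, n_slots=20, n_intents=7,
                 n_gcn_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim // 2, batch_first=True,
                               bidirectional=True)
        self.gcn_layers = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(n_gcn_layers)])
        self.slot_head = nn.Linear(dim, n_slots)      # token-level slot tags
        self.intent_head = nn.Linear(dim, n_intents)  # sentence-level intent

    def forward(self, token_ids, adj):
        # token_ids: (batch, seq_len); adj: (batch, seq_len, seq_len),
        # a symmetrized, self-loop-augmented dependency-tree adjacency.
        h, _ = self.encoder(self.embed(token_ids))
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        for layer in self.gcn_layers:
            # Mean aggregation over syntactic neighbors, plus a residual.
            h = h + F.relu(layer(adj @ h / deg))
        slot_logits = self.slot_head(h)               # (batch, seq, n_slots)
        intent_logits = self.intent_head(h.mean(dim=1))
        return slot_logits, intent_logits

if __name__ == "__main__":
    model = SyntacticGCNJointModel(vocab_size=1000)
    ids = torch.randint(0, 1000, (2, 9))
    adj = torch.eye(9).expand(2, 9, 9).clone()        # toy graphs: self-loops only
    slot_logits, intent_logits = model(ids, adj)
    # A joint objective would sum per-token slot cross-entropy with the
    # utterance-level intent cross-entropy.
    print(slot_logits.shape, intent_logits.shape)
```

In practice the adjacency matrices would come from a dependency parser; the BERT variant mentioned in the abstract would simply replace the embedding-plus-LSTM encoder with contextual BERT representations.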