2023
SPM: A Split-Parsing Method for Joint Multi-Intent Detection and Slot Filling
Sheng Jiang | Su Zhu | Ruisheng Cao | Qingliang Miao | Kai Yu
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)
In a task-oriented dialogue system, joint intent detection and slot filling for multi-intent utterances becomes important as users tend to issue richer queries. Current state-of-the-art studies process multi-intent utterances with a single joint model of sequence labelling and multi-label classification, which cannot generalize to utterances with more intents than seen in the training samples and lacks the ability to assign slots to each corresponding intent. To overcome these problems, we propose a Split-Parsing Method (SPM) for joint multi-intent detection and slot filling, a two-stage method. It first splits an input sentence into multiple sub-sentences, each containing a single intent; a joint single-intent detection and slot filling model is then applied to parse each sub-sentence recurrently, and finally the parsed results are integrated. The sub-sentence splitting task is itself treated as a sequence labelling problem with only one entity label, which generalizes effectively to sentences with more intents than appear in the training set. Experimental results on three multi-intent datasets show that our method obtains substantial improvements over different baselines.
2022
The AISP-SJTU Translation System for WMT 2022
Guangfeng Liu | Qinpei Zhu | Xingyu Chen | Renjie Feng | Jianxin Ren | Renshou Wu | Qingliang Miao | Rui Wang | Kai Yu
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes AISP-SJTU’s participation in the WMT 2022 shared general MT task. In this shared task, we participated in four translation directions: English-Chinese, Chinese-English, English-Japanese and Japanese-English. Our systems are based on the Transformer architecture with several novel and effective variants in network depth and internal structure. In our experiments, we employ data filtering, large-scale back-translation, knowledge distillation, forward-translation, iterative in-domain knowledge finetuning and model ensembling. The constrained systems achieve case-sensitive BLEU scores of 48.8, 29.7, 39.3 and 22.0 on EN-ZH, ZH-EN, EN-JA and JA-EN, respectively.
The AISP-SJTU Simultaneous Translation System for IWSLT 2022
Qinpei Zhu | Renshou Wu | Guangfeng Liu | Xinyu Zhu | Xingyu Chen | Yang Zhou | Qingliang Miao | Rui Wang | Kai Yu
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper describes AISP-SJTU’s submissions to the IWSLT 2022 Simultaneous Translation task. We participate in text-to-text and speech-to-text simultaneous translation from English to Mandarin Chinese. Training of the CAAT model is improved by training across multiple right-context window sizes, which achieves good online performance without fixing a prior right-context window size at training time. For the speech-to-text task, our best submitted model achieves 25.87, 26.21 and 26.45 BLEU in the low, medium and high latency regimes on tst-COMMON, corresponding to 27.94, 28.31 and 28.43 BLEU in the text-to-text task.
2016
Automatic Identifying Entity Type in Linked Data
Qingliang Miao | Ruiyu Fang | Shuangyong Song | Zhongguang Zheng | Lu Fang | Yao Meng | Jun Sun
Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Posters
2013
Cross-Lingual Link Discovery between Chinese and English Wiki Knowledge Bases
Qingliang Miao | Huayu Lu | Shu Zhang | Yao Meng
Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC 27)
2012
Extracting and Visualizing Semantic Relationships from Chinese Biomedical Text
Qingliang Miao | Shu Zhang | Bo Zhang | Hao Yu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation