2022
Multi-Party Empathetic Dialogue Generation: A New Task for Dialog Systems
Ling.Yu Zhu
|
Zhengkun Zhang
|
Jun Wang
|
Hongbin Wang
|
Haiying Wu
|
Zhenglu Yang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Empathetic dialogue combines emotion understanding, feeling projection, and appropriate response generation. Existing work on empathetic dialogue generation concentrates on the two-party conversation scenario, yet multi-party dialogues are pervasive in reality. Furthermore, emotion and sensibility are typically conflated, while a refined empathy analysis is needed to comprehend fragile and nuanced human feelings. We address these issues by proposing a novel task, Multi-Party Empathetic Dialogue Generation. Additionally, we introduce a Static-Dynamic model for Multi-Party Empathetic Dialogue Generation, SDMPED, as a baseline that explores static sensibility and dynamic emotion for multi-party empathetic dialogue learning, the aspects that help SDMPED achieve state-of-the-art performance.
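The abstract does not detail the architecture, so the following is only a minimal sketch of the general static/dynamic idea it names: a fixed per-speaker sensibility embedding fused with an emotion state that evolves across utterances. All module names and shapes are assumptions, not SDMPED's published design.

```python
import torch
import torch.nn as nn

class StaticDynamicFusion(nn.Module):
    """Illustrative fusion of a static per-speaker sensibility embedding
    with a dynamic emotion state updated after every utterance.
    Names and shapes are ours, not the paper's."""

    def __init__(self, num_speakers: int, dim: int = 128):
        super().__init__()
        self.sensibility = nn.Embedding(num_speakers, dim)  # static: one row per speaker
        self.emotion_rnn = nn.GRUCell(dim, dim)             # dynamic: stepped per utterance
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, speaker_ids: torch.Tensor, utterance_embs: torch.Tensor):
        """speaker_ids: (T,) long tensor; utterance_embs: (T, dim)."""
        state = utterance_embs.new_zeros(1, self.emotion_rnn.hidden_size)
        fused = []
        for t in range(utterance_embs.size(0)):
            state = self.emotion_rnn(utterance_embs[t:t + 1], state)   # update emotion state
            static = self.sensibility(speaker_ids[t:t + 1])            # look up sensibility
            fused.append(self.fuse(torch.cat([static, state], dim=-1)))
        return torch.cat(fused, dim=0)  # (T, dim), fed to a response decoder
```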
KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering
Jianing Wang
|
Chengyu Wang
|
Minghui Qiu
|
Qiuhui Shi
|
Hongbin Wang
|
Jun Huang
|
Ming Gao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Extractive Question Answering (EQA) is one of the most essential tasks in Machine Reading Comprehension (MRC), and it can be solved by fine-tuning the span-selection heads of Pre-trained Language Models (PLMs). However, most existing approaches for MRC perform poorly in the few-shot learning scenario. To address this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a new paradigm for EQA that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. At the same time, rich semantics from an external knowledge base (KB) and the passage context are used to enhance the query's representations. In addition, to boost the performance of PLMs, we jointly train the model with the MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.
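As a rough illustration of the non-autoregressive MLM formulation described above (not KECP's actual implementation), one can score candidate answer tokens at the [MASK] slots of a prompt and constrain them to tokens appearing in the passage, which keeps the output extractive. Tensor layouts below are assumptions:

```python
import torch

def mlm_span_scores(mlm_logits, mask_positions, passage_token_ids):
    """Score passage tokens as answer candidates at each [MASK] slot,
    non-autoregressively. Layouts are assumptions, not KECP's code.

    mlm_logits:        (seq_len, vocab) logits from any masked LM
    mask_positions:    (k,)   indices of the [MASK] slots in the prompt
    passage_token_ids: (p,)   token ids occurring in the passage
    """
    log_probs = mlm_logits.log_softmax(dim=-1)       # (seq_len, vocab)
    slot_lp = log_probs[mask_positions]              # (k, vocab)
    # Constraining to tokens in the passage makes the MLM output extractive.
    constrained = slot_lp[:, passage_token_ids]      # (k, p)
    best = constrained.argmax(dim=-1)                # best passage token per slot
    return passage_token_ids[best], constrained.max(dim=-1).values.sum()
```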
Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding
Jianing Wang
|
Wenkang Huang
|
Minghui Qiu
|
Qiuhui Shi
|
Hongbin Wang
|
Xiang Li
|
Ming Gao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Knowledge-enhanced Pre-trained Language Models (PLMs) have recently received significant attention, aiming to incorporate factual knowledge into PLMs. However, most existing methods modify the internal structures of fixed types of PLMs by stacking complicated modules, and they introduce redundant and irrelevant factual knowledge from knowledge bases (KBs). To address these problems, we introduce a new knowledge prompting paradigm and propose KP-PLM, a knowledge-prompting-based PLM framework that can be flexibly combined with existing mainstream PLMs. Specifically, we first construct a knowledge sub-graph from KBs for each context. Then we design multiple continuous prompt rules and transform the knowledge sub-graph into natural language prompts. To further leverage the factual knowledge in these prompts, we propose two novel knowledge-aware self-supervised tasks: prompt relevance inspection and masked prompt modeling. Extensive experiments on multiple natural language understanding (NLU) tasks show the superiority of KP-PLM over other state-of-the-art methods in both full-resource and low-resource settings. Our source code will be released upon acceptance of the paper.
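A minimal sketch of the verbalization step described above: turning KB triples into natural-language prompt sentences via templates, which are then prepended to the context before encoding. The template wording and relation names are illustrative, not the paper's actual prompt rules.

```python
# Hypothetical templates; KP-PLM's actual prompt rules may differ.
TEMPLATES = {
    "instance_of": "{head} is a kind of {tail}.",
    "located_in":  "{head} is located in {tail}.",
}

def verbalize_subgraph(triples, default="{head} {relation} {tail}."):
    """triples: iterable of (head, relation, tail) strings from a KB."""
    sentences = [
        TEMPLATES.get(rel, default).format(
            head=h, relation=rel.replace("_", " "), tail=t)
        for h, rel, t in triples
    ]
    return " ".join(sentences)

# verbalize_subgraph([("Paris", "located_in", "France")])
# -> "Paris is located in France."
```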
A Span-based Multimodal Variational Autoencoder for Semi-supervised Multimodal Named Entity Recognition
Baohang Zhou
|
Ying Zhang
|
Kehui Song
|
Wenya Guo
|
Guoqing Zhao
|
Hongbin Wang
|
Xiaojie Yuan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multimodal named entity recognition (MNER) on social media is a challenging task that aims to extract named entities from free text and incorporate images to classify them into user-defined types. However, annotating named entities on social media demands a large amount of human effort. Existing semi-supervised named entity recognition methods focus on the text modality and are used to reduce labeling costs in traditional NER, but they are not effective for semi-supervised MNER, because the MNER task combines text with image information and must account for mismatches between the posted text and image. To fuse text and image features for MNER effectively under a semi-supervised setting, we propose a novel span-based multimodal variational autoencoder (SMVAE) model. The proposed method exploits modality-specific VAEs to model text and image latent features and uses a product-of-experts to acquire multimodal features. In our approach, the implicit relations between labels and multimodal features are modeled by the multimodal VAE, so the useful information in unlabeled data can be exploited under the semi-supervised setting. Experimental results on two benchmark datasets demonstrate that our approach not only outperforms baselines in the supervised setting but also improves MNER performance with less labeled data than existing semi-supervised methods.
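The product-of-experts fusion named in the abstract has a standard closed form for diagonal Gaussian posteriors: the fused precision is the sum of the experts' precisions, and the fused mean is precision-weighted. A minimal sketch (variable names ours, not the paper's):

```python
import torch

def product_of_experts(mu_text, logvar_text, mu_img, logvar_img):
    """Precision-weighted fusion of two diagonal Gaussian posteriors.
    Some PoE variants also include a unit-Gaussian prior expert,
    which would add an extra precision of 1 to the sum below."""
    prec_t = torch.exp(-logvar_text)                 # 1 / sigma_text^2
    prec_i = torch.exp(-logvar_img)                  # 1 / sigma_img^2
    var = 1.0 / (prec_t + prec_i)                    # fused variance
    mu = var * (mu_text * prec_t + mu_img * prec_i)  # fused mean
    return mu, torch.log(var)
```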
Attention-based Mongolian Speaker Feature Extraction (基于注意力的蒙古语说话人特征提取方法)
Fangyuan Zhu (朱方圆)
|
Zhiqiang Ma (马志强)
|
Zhiqiang Liu (刘志强)
|
Caijilahu Bao (宝财吉拉呼)
|
Hongbin Wang (王洪彬)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
Speaker features extracted by existing speaker feature extraction models have low discriminability across speakers, so Mongolian acoustic models cannot learn discriminative information and fail to adapt to different speakers. We propose an attention-based speaker adaptation method that introduces a Neural Turing Machine for adaptation: a memory module stores speaker features, and an attention mechanism computes a similarity weight matrix between the speaker features in memory and the speaker feature of the current utterance; the weight matrix recombines the memory into a speaker feature called the s-vector, thereby improving the discriminability among speaker features. On the IMUT-MCT dataset, we conduct ablation experiments on the speaker feature extraction method, model adaptation experiments, and case studies. Experimental results show that, compared with the i-vector and d-vector speaker features, the s-vector reduces SER and WER by 4.96% and 1.08%, respectively; across different Mongolian acoustic models, the proposed method consistently improves performance over the baselines.
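A minimal sketch of the attention read-out the abstract describes: similarity weights between the memory module's stored speaker features and the current utterance's feature recombine the memory into an s-vector. Shapes and the scaling choice are assumptions:

```python
import torch

def s_vector(memory, query):
    """Attention read-out over a speaker-feature memory.

    memory: (N, d) stored speaker features (e.g., in a Neural Turing Machine)
    query:  (d,)   speaker feature of the current utterance
    """
    scores = memory @ query                     # (N,) similarities
    scores = scores / memory.size(1) ** 0.5     # scaled dot-product (our choice)
    weights = torch.softmax(scores, dim=0)      # similarity weight matrix
    return weights @ memory                     # (d,) recombined s-vector
```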
2021
A Dialogue-based Information Extraction System for Medical Insurance Assessment
Shuang Peng
|
Mengdi Zhou
|
Minghui Yang
|
Haitao Mi
|
Shaosheng Cao
|
Zujie Wen
|
Teng Xu
|
Hongbin Wang
|
Lei Liu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Incorporating Circumstances into Narrative Event Prediction
Shichao Wang
|
Xiangrui Cai
|
HongBin Wang
|
Xiaojie Yuan
Findings of the Association for Computational Linguistics: EMNLP 2021
Narrative event prediction aims to predict what happens after a sequence of events, which is essential to modeling sophisticated real-world events. Existing studies focus on mining inter-event relationships while ignoring how the events happened, which we call circumstances. We observe that event circumstances indicate what will happen next. To incorporate event circumstances into narrative event prediction, we propose CircEvent, which adopts two multi-head attention modules to retrieve circumstances at the local and global levels. We also introduce a regularization of the attention weights to leverage the alignment between events and local circumstances. Experimental results demonstrate that CircEvent outperforms existing baselines by 12.2%. Further analysis demonstrates the effectiveness of our multi-head attention modules and the regularization.
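As a hedged sketch of the retrieval-plus-regularization idea (the module layout and the alignment target are assumptions, not CircEvent's published design): two multi-head attentions read local and global circumstances, and the local attention weights are pulled toward a known event-to-circumstance alignment.

```python
import torch
import torch.nn as nn

class CircumstanceRetriever(nn.Module):
    """Illustrative only; names and layout are ours, not CircEvent's."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, events, local_circ, global_circ, align):
        """events: (B, E, d); local_circ: (B, L, d); global_circ: (B, G, d);
        align: (B, E, L) 0/1 float event-to-local-circumstance alignment."""
        loc, loc_w = self.local_attn(events, local_circ, local_circ)  # loc_w: (B, E, L)
        glo, _ = self.global_attn(events, global_circ, global_circ)
        # Regularize local attention toward the known alignment distribution.
        target = align / align.sum(-1, keepdim=True).clamp(min=1)
        reg_loss = ((loc_w - target) ** 2).mean()
        return events + loc + glo, reg_loss
```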