2024
Hyperbolic Graph Neural Network for Temporal Knowledge Graph Completion
Yancong Li | Xiaoming Zhang | Ying Cui | Shuai Ma
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Temporal Knowledge Graphs (TKGs) represent a crucial source of structured temporal information and exhibit significant utility in various real-world applications. However, TKGs are susceptible to incompleteness, necessitating Temporal Knowledge Graph Completion (TKGC) to predict missing facts. Existing models have encountered limitations in effectively capturing the intricate temporal dynamics and hierarchical relations within TKGs. To address these challenges, HyGNet is proposed, leveraging hyperbolic geometry to effectively model temporal knowledge graphs. The model comprises two components: the Hyperbolic Gated Graph Neural Network (HGGNN) and the Hyperbolic Convolutional Neural Network (HCNN). HGGNN aggregates neighborhood information in hyperbolic space, capturing the contextual information and dependencies between entities. HCNN interacts with embeddings in hyperbolic space, modeling the complex interactions among entities, relations, and timestamps. Additionally, a consistency loss is introduced to ensure smooth transitions in temporal embeddings. Extensive experiments on four TKGC benchmark datasets highlight the effectiveness of HyGNet: it achieves state-of-the-art performance compared to previous models, showcasing its potential for real-world applications involving temporal reasoning and knowledge prediction.
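As background on the hyperbolic geometry this abstract relies on, here is a minimal Poincaré-ball sketch in plain Python: Möbius addition and the induced geodesic distance, the basic operations hyperbolic embedding models build on. This is generic illustrative code, not HyGNet's implementation; the curvature parameter `c` and the function names are assumptions for the sketch.

```python
import math

def mobius_add(x, y, c=1.0):
    """Mobius addition of two points on the Poincare ball with curvature -c."""
    xy = sum(a * b for a, b in zip(x, y))   # inner product <x, y>
    x2 = sum(a * a for a in x)              # squared norm of x
    y2 = sum(b * b for b in y)              # squared norm of y
    coef_x = 1 + 2 * c * xy + c * y2
    coef_y = 1 - c * x2
    denom = 1 + 2 * c * xy + c * c * x2 * y2
    return [(coef_x * a + coef_y * b) / denom for a, b in zip(x, y)]

def poincare_dist(x, y, c=1.0):
    """Geodesic distance d(x, y) = (2 / sqrt(c)) * atanh(sqrt(c) * ||(-x) (+) y||)."""
    diff = mobius_add([-a for a in x], y, c)
    norm = math.sqrt(sum(d * d for d in diff))
    sqrt_c = math.sqrt(c)
    # clamp the argument slightly below 1 to stay in atanh's domain
    return (2 / sqrt_c) * math.atanh(min(sqrt_c * norm, 1 - 1e-10))
```

The distance grows without bound as points approach the ball's boundary, which is what makes hyperbolic space well suited to embedding hierarchies.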
MDS: A Fine-Grained Dataset for Multi-Modal Dialogue Summarization
Zhipeng Liu | Xiaoming Zhang | Litian Zhang | Zelong Yu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
With the explosion of dialogue scenarios, summarizing dialogues into short messages has recently drawn much attention. In multi-modal dialogue scenes, people tend to use tone and body language to convey their intentions. While traditional dialogue summarization has predominantly focused on textual content, this approach may overlook vital visual and audio information essential for understanding multi-modal interactions. To advance the field of multi-modal dialogue summarization, we develop a new multi-modal dialogue summarization dataset (MDS), which aims to enhance the variety and scope of data available for this research area. MDS provides a demanding testbed for multi-modal dialogue summarization. We then conduct a comparative analysis of various summarization techniques on MDS and find that existing methods tend to produce redundant and incoherent summaries. All of the models generate unfaithful facts to some degree, suggesting future research directions. MDS is available at https://github.com/R00kkie/MDS.
2021
Matching Distributions between Model and Data: Cross-domain Knowledge Distillation for Unsupervised Domain Adaptation
Bo Zhang | Xiaoming Zhang | Yun Liu | Lei Cheng | Zhoujun Li
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a source domain to an unlabeled target domain. Existing methods typically learn to adapt the target model by exploiting the source data and sharing the network architecture across domains. However, this pipeline puts the source data at risk and is inflexible for deploying the target model. This paper tackles a novel setting where only a trained source model is available, and different network architectures can be adapted for the target domain depending on deployment environments. We propose a generic framework named Cross-domain Knowledge Distillation (CdKD) that does not need any source data. CdKD matches the joint distributions between a trained source model and a set of target data while distilling knowledge from the source model to the target domain. As an important type of knowledge in the source domain, gradient information is exploited for the first time to boost transfer performance. Experiments on cross-domain text classification demonstrate that CdKD achieves superior performance, verifying its effectiveness in this novel setting.
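To make the source-free distillation setting concrete, here is a toy sketch of the standard distillation machinery: a student is trained on unlabeled target inputs to match the teacher's temperature-softened output distribution via KL divergence. This illustrates vanilla knowledge distillation only, not CdKD's joint-distribution matching or its use of gradient information; the temperature value and function names are assumptions for the sketch.

```python
import math

def softmax(logits, temp=2.0):
    """Temperature-softened softmax, the usual soft target for distillation."""
    exps = [math.exp(z / temp) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temp=2.0):
    """KL(teacher || student) over softened class distributions.

    In a source-free setting only the trained teacher model and unlabeled
    target inputs are available, so this loss is computed on target data alone.
    """
    p = softmax(teacher_logits, temp)
    q = softmax(student_logits, temp)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero exactly when the student reproduces the teacher's distribution, and positive otherwise, which is what drives the student toward the teacher's decision behavior without any source data.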
2019
Hierarchy Response Learning for Neural Conversation Generation
Bo Zhang | Xiaoming Zhang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Neural encoder-decoder models have shown great promise in conversation generation. However, they cannot perceive and express intention effectively, and hence often generate dull and generic responses. Unlike past work that has focused on diversifying the output at the word or discourse level with a flat model to alleviate this problem, we propose a hierarchical generation model to capture the different levels of diversity using conditional variational autoencoders. Specifically, a hierarchical response generation (HRG) framework is proposed to capture the conversation intention in a natural and coherent way. It has two modules: an expression reconstruction model that captures the hierarchical correlation between expression and intention, and an expression attention model that effectively combines expressions with contents. Finally, the training procedure of HRG is improved by introducing a reconstruction loss. Experimental results show that our model can generate responses with more appropriate content and expression.
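A conditional VAE objective of the kind this abstract builds on combines a reconstruction term with a KL term between the recognition (posterior) network and the prior network. The closed-form KL between two diagonal Gaussians, the usual regularizer in that objective, can be sketched as follows; this is the generic formula with hypothetical parameter names, not HRG's actual code.

```python
import math

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(q || p) between two diagonal Gaussians.

    In a CVAE, q is the recognition network's posterior and p is the
    prior network's distribution; this KL term regularizes the latent code.
    """
    kl = 0.0
    for mq, lq, mp, lp in zip(mu_q, logvar_q, mu_p, logvar_p):
        # per-dimension KL for 1-D Gaussians in log-variance form
        kl += 0.5 * (lp - lq + (math.exp(lq) + (mq - mp) ** 2) / math.exp(lp) - 1.0)
    return kl
```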
2018
Keyphrase Generation with Correlation Constraints
Jun Chen | Xiaoming Zhang | Yu Wu | Zhao Yan | Zhoujun Li
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
In this paper, we study automatic keyphrase generation. Although conventional approaches to this task show promising results, they neglect correlation among keyphrases, resulting in duplication and coverage issues. To solve these problems, we propose a new sequence-to-sequence architecture for keyphrase generation named CorrRNN, which captures correlation among multiple keyphrases in two ways. First, we employ a coverage vector to indicate whether each word in the source document has been summarized by previous phrases, improving keyphrase coverage. Second, preceding phrases are taken into account to eliminate duplicate phrases and improve result coherence. Experimental results show that our model significantly outperforms the state-of-the-art method on benchmark datasets in terms of both accuracy and diversity.
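The coverage-vector idea mentioned above can be sketched in a few lines: accumulate the attention weights placed on each source word across decoding steps, and penalize re-attending to words that are already covered. This is the generic coverage mechanism from the sequence-to-sequence literature, not CorrRNN's exact formulation; the function names are illustrative.

```python
def update_coverage(coverage, attention):
    """Add this step's attention weights into the running coverage vector:
    each entry tracks how much a source word has already been attended to."""
    return [c + a for c, a in zip(coverage, attention)]

def coverage_penalty(coverage, attention):
    """Sum of min(coverage, attention) per source word: high when the model
    re-attends to words that previous phrases already summarized."""
    return sum(min(c, a) for c, a in zip(coverage, attention))
```

On the first step the coverage vector is all zeros, so the penalty is zero; repeating the same attention pattern afterwards incurs the full overlap as a penalty, which discourages duplicate phrases.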
2017
Chinese Answer Extraction Based on POS Tree and Genetic Algorithm
Shuihua Li | Xiaoming Zhang | Zhoujun Li
Proceedings of the 9th SIGHAN Workshop on Chinese Language Processing
Answer extraction is the most important part of a Chinese web-based question answering system. To enhance the robustness and adaptability of answer extraction to new domains, and to eliminate the influence of incomplete and noisy search snippets, we propose two new answer extraction methods. We utilize text patterns to generate Part-of-Speech (POS) patterns, and propose a method to construct a POS tree from these POS patterns. The POS tree is useful for candidate answer extraction in web-based question answering. To retrieve an efficient POS tree, similarities between questions are used to select question-answer pairs whose questions are similar to the unanswered question; the POS tree is then improved based on these pairs. To rank the candidate answers, the weights of the leaf nodes of the POS tree are calculated with a heuristic method, and the Genetic Algorithm (GA) is used to train the weights. Experimental results of 10-fold cross-validation show that the weighted POS tree trained by GA can improve the accuracy of answer extraction.
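The GA weight-training step can be illustrated generically: evolve a population of real-valued weight vectors (here standing in for POS-tree leaf weights) under selection, one-point crossover, and point mutation, maximizing a caller-supplied fitness function. `genetic_optimize` and all its hyperparameters are hypothetical illustrations, not the paper's configuration.

```python
import random

def genetic_optimize(fitness, dim, pop_size=20, generations=50,
                     mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm over real-valued weight vectors in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < mutation_rate:      # point mutation
                child[rng.randrange(dim)] = rng.uniform(0, 1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

For example, with a fitness that rewards every weight being near 0.5, the evolved vector converges toward that target.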
2012
A Semi-Supervised Bayesian Network Model for Microblog Topic Classification
Yan Chen | Zhoujun Li | Liqiang Nie | Xia Hu | Xiangyu Wang | Tat-Seng Chua | Xiaoming Zhang
Proceedings of COLING 2012