Xuefei Li
2024
Sequential and Repetitive Pattern Learning for Temporal Knowledge Graph Reasoning
Xuefei Li | Huiwei Zhou | Weihong Yao | Wenchu Li | Yingyu Lin | Lei Du
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Temporal Knowledge Graph (TKG) reasoning has received growing interest recently, especially forecasting future facts from historical KG sequences. Existing studies typically use a recurrent neural network to learn evolutional entity representations for temporal reasoning. However, these methods struggle to accurately capture complex temporal evolutional patterns such as sequential and repetitive patterns. To tackle this challenge, we propose a novel Sequential and Repetitive Pattern Learning (SRPL) method, which comprehensively captures both sequential and repetitive patterns. Specifically, a Dependency-aware Sequential Pattern Learning (DSPL) component expresses the temporal dependencies of each historical timestamp as embeddings to accurately capture the sequential patterns across temporally adjacent facts. A Time-interval guided Repetitive Pattern Learning (TRPL) component models the irregular time intervals between historical repetitive facts to capture the repetitive patterns. Extensive experiments on four representative benchmarks demonstrate that our proposed method outperforms state-of-the-art methods on all metrics by a clear margin; on the GDELT dataset, the MRR improvement reaches 18.84%.
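The abstract names the two components but not their internals; the PyTorch sketch below is one assumption-laden reading of them: a GRU over timestamp-augmented entity snapshots standing in for DSPL, and a learnable exponential decay over irregular time intervals standing in for TRPL. All class names, dimensions, and the decay form are illustrative, not the paper's actual design.

```python
# Minimal sketch of the two SRPL components as described in the abstract.
# Everything here (module names, sizes, the decay weighting) is an assumption
# for illustration, not the authors' implementation.
import torch
import torch.nn as nn

class DependencyAwareSequentialEncoder(nn.Module):
    """Toy stand-in for DSPL: embeds each historical timestamp and lets a GRU
    propagate dependencies across temporally adjacent KG snapshots."""
    def __init__(self, ent_dim: int, num_timestamps: int):
        super().__init__()
        self.time_emb = nn.Embedding(num_timestamps, ent_dim)  # timestamp embeddings (assumed form)
        self.gru = nn.GRU(ent_dim, ent_dim, batch_first=True)

    def forward(self, ent_seq: torch.Tensor, ts_idx: torch.Tensor) -> torch.Tensor:
        # ent_seq: (batch, T, dim) entity representations per historical snapshot
        # ts_idx:  (batch, T) timestamp indices for those snapshots
        h, _ = self.gru(ent_seq + self.time_emb(ts_idx))
        return h[:, -1]  # representation after the most recent snapshot

class TimeIntervalRepetitionScore(nn.Module):
    """Toy stand-in for TRPL: down-weights older repetitions of a queried fact
    with a learnable exponential decay over the (irregular) time interval."""
    def __init__(self):
        super().__init__()
        self.decay = nn.Parameter(torch.tensor(0.1))

    def forward(self, repeat_mask: torch.Tensor, intervals: torch.Tensor) -> torch.Tensor:
        # repeat_mask: (batch, T) 1 where the queried fact occurred at that snapshot
        # intervals:   (batch, T) time gap between that snapshot and the query time
        return (repeat_mask * torch.exp(-self.decay * intervals)).sum(dim=-1)
```

In such a setup, the sequential representation would score candidate entities while the repetition score biases the prediction toward facts that recurred recently, which matches the abstract's split between sequential and repetitive patterns.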
2019
DUT-NLP at MEDIQA 2019: An Adversarial Multi-Task Network to Jointly Model Recognizing Question Entailment and Question Answering
Huiwei Zhou | Xuefei Li | Weihong Yao | Chengkun Lang | Shixian Ning
Proceedings of the 18th BioNLP Workshop and Shared Task
In this paper, we propose a novel model called Adversarial Multi-Task Network (AMTN) for jointly modeling the Recognizing Question Entailment (RQE) and medical Question Answering (QA) tasks. AMTN uses a pre-trained BioBERT model and an Interactive Transformer to learn shared semantic representations across the two tasks through a parameter-sharing mechanism. Meanwhile, an adversarial training strategy is introduced to separate the private features of each task from the shared representations. Experiments on the BioNLP 2019 RQE and QA Shared Task datasets show that our model benefits from the shared representations of both tasks provided by multi-task learning and adversarial training, and obtains significant improvements over the single-task models.
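As a rough illustration of the adversarial multi-task idea described above, the following PyTorch sketch pairs a shared encoder (a plain linear layer standing in for BioBERT and the Interactive Transformer) with RQE and QA heads, plus a task discriminator trained through a gradient-reversal layer so the shared features become task-invariant. All names and sizes are assumptions, not the paper's actual architecture.

```python
# Hedged sketch of an adversarial multi-task network with parameter sharing.
# The shared encoder is a toy stand-in for BioBERT + Interactive Transformer.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradients in the backward pass, so
    the shared encoder is trained to fool the task discriminator."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lambd * grad_out, None  # reversed gradient; none for lambd

class AdversarialMultiTaskNet(nn.Module):
    def __init__(self, in_dim: int = 768, hidden: int = 256):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # shared semantic space
        self.rqe_head = nn.Linear(hidden, 2)   # entailment vs. non-entailment
        self.qa_head = nn.Linear(hidden, 2)    # relevant vs. irrelevant answer
        self.task_disc = nn.Linear(hidden, 2)  # adversary: which task produced the features?

    def forward(self, x: torch.Tensor, task: str, lambd: float = 1.0):
        z = self.shared(x)
        task_logits = self.task_disc(GradReverse.apply(z, lambd))
        head = self.rqe_head if task == "rqe" else self.qa_head
        return head(z), task_logits
```

Training would sum each task's classification loss with the discriminator loss; the gradient reversal makes minimizing the latter push shared features away from task-specific (private) information, matching the separation the abstract describes.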
Co-authors
- Huiwei Zhou 2
- Weihong Yao 2
- Chengkun Lang 1
- Shixian Ning 1
- Wenchu Li 1
- Yingyu Lin 1
- Lei Du 1