Jian Yao
2024
Joint Pre-Encoding Representation and Structure Embedding for Efficient and Low-Resource Knowledge Graph Completion
Chenyu Qiu | Pengjiang Qian | Chuang Wang | Jian Yao | Li Liu | Fang Wei | Eddie Y.k. Eddie
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Knowledge graph completion (KGC) aims to infer missing or incomplete parts of a knowledge graph. Existing models are generally divided into structure-based and description-based models, where description-based models often require longer training and inference times as well as increased memory usage. In this paper, we propose the Pre-Encoded Masked Language Model (PEMLM) to solve the KGC problem efficiently. By encoding textual descriptions into semantic representations before training, the required resources are significantly reduced. Furthermore, we introduce a straightforward but effective fusion framework to integrate structural embeddings with pre-encoded semantic descriptions, which enhances the model’s prediction performance on 1-N relations. The experimental results demonstrate that our proposed strategy attains state-of-the-art performance on the WN18RR (MRR +5.4% and Hits@1 +6.4%) and UMLS datasets. Compared to existing models, we increase inference speed by 30x and reduce training memory by approximately 60%.
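The core efficiency idea in the abstract, encoding entity descriptions once before training and then fusing the cached vectors with trainable structure embeddings, can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the stand-in `encode_description` function, the concatenation fusion, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_description(text, dim=8):
    # Stand-in for a frozen text encoder (hypothetical); in PEMLM this step
    # runs ONCE before training, so no text encoder sits in the training loop.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

descriptions = {"paris": "capital of France", "france": "country in Europe"}

# Pre-encoded semantic representations, computed once and cached.
pre_encoded = {e: encode_description(t) for e, t in descriptions.items()}

# Trainable structural embeddings (randomly initialized here).
struct_emb = {e: rng.standard_normal(8) for e in descriptions}

def fused(entity):
    # Simple fusion by concatenation: structural part + cached semantic part.
    # The paper's fusion framework may differ; this only shows the data flow.
    return np.concatenate([struct_emb[entity], pre_encoded[entity]])

print(fused("paris").shape)  # (16,)
```

Because the semantic half is precomputed, only the small structural embeddings are updated during training, which is where the memory and speed savings would come from under this sketch.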
2018
Towards Less Generic Responses in Neural Conversation Models: A Statistical Re-weighting Method
Yahui Liu | Wei Bi | Jun Gao | Xiaojiang Liu | Jian Yao | Shuming Shi
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Sequence-to-sequence neural generation models have achieved promising performance on short-text conversation tasks. However, they tend to generate generic/dull responses, leading to an unsatisfying dialogue experience. We observe that in conversation tasks, each query can have multiple responses, which forms a 1-to-n or m-to-n relationship in the view of the total corpus. The objective function used in standard sequence-to-sequence models is then dominated by loss terms with generic patterns. Inspired by this observation, we introduce a statistical re-weighting method that assigns different weights to the multiple responses of the same query and trains a common neural generation model with these weights. Experimental results on a large Chinese dialogue corpus show that our method improves the acceptance rate of generated responses compared with several baseline models and significantly reduces the number of generated generic responses.
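The re-weighting idea described above can be sketched in a few lines: responses that recur across many queries (generic patterns) get smaller weights, so they no longer dominate the training objective. This is an illustrative sketch only; the inverse-frequency weight and all names are assumptions, not the paper's actual weighting statistic.

```python
from collections import Counter

# Toy corpus of (query, response) pairs; "i don't know" is the generic
# response that recurs across different queries.
pairs = [
    ("how are you", "i don't know"),
    ("what time is it", "i don't know"),
    ("where are you", "i don't know"),
    ("how are you", "great, thanks"),
]

# Corpus-level response frequencies, gathered before training.
resp_freq = Counter(r for _, r in pairs)

def weight(response):
    # Hypothetical choice: inverse frequency, so a response seen 3 times
    # contributes 1/3 as much per occurrence as a unique response.
    return 1.0 / resp_freq[response]

def weighted_loss(per_pair_nll):
    # per_pair_nll: list of (response, nll) terms from a seq2seq model.
    # The standard objective is the unweighted mean of the nll terms;
    # here each term is scaled by its response weight first.
    return sum(weight(r) * nll for r, nll in per_pair_nll) / len(per_pair_nll)

print(weight("i don't know"), weight("great, thanks"))
```

Under this sketch the three generic loss terms together count as much as one unique response, which is the "statistical re-weighting" effect in miniature.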