Knowledge Context Modeling with Pre-trained Language Models for Contrastive Knowledge Graph Completion
Guangqian Yang | Yi Liu | Lei Zhang | Licheng Zhang | Hongtao Xie | Zhendong Mao
Findings of the Association for Computational Linguistics: ACL 2024
Text-based knowledge graph completion (KGC) methods use pre-trained language models to encode triples and are further fine-tuned to perform completion. Despite their excellent performance, they neglect the knowledge context during inference. Intuitively, knowledge contexts, i.e., the neighboring triples around a target triple, are important for triple inference, since they provide additional detailed information about the entities. To this end, we propose a novel framework named KnowC, which models the knowledge context as additional prompts for pre-trained language models in knowledge graph completion. Given the substantial number of neighbors typically associated with an entity, along with the limited input token capacity of language models, we further devise several strategies to sample the neighbors. We conduct extensive experiments on the common datasets FB15k-237, WN18RR, and Wikidata5M; the results show that KnowC achieves state-of-the-art performance.
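The abstract itself gives no implementation details, but its central idea, serializing a sampled set of neighboring triples into a prompt under a fixed token budget, can be sketched as follows. The greedy budget-based sampler, the `token_budget` parameter, and all function names here are illustrative assumptions for exposition, not the authors' actual sampling strategies or prompt format.

```python
# Minimal sketch of knowledge-context prompting for KGC. The sampler,
# the whitespace token estimate, and the prompt template are assumptions,
# not the KnowC paper's actual design.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def serialize(triple: Triple) -> str:
    """Turn a neighboring triple into a short natural-language clause."""
    h, r, t = triple
    return f"{h} {r.replace('_', ' ')} {t}."

def sample_context(neighbors: List[Triple], token_budget: int = 128) -> List[str]:
    """Greedily keep serialized neighbors until the (approximate,
    whitespace-based) token budget is exhausted."""
    context, used = [], 0
    for nb in neighbors:  # assume neighbors are pre-ordered by relevance
        text = serialize(nb)
        cost = len(text.split())  # crude stand-in for a real tokenizer count
        if used + cost > token_budget:
            break
        context.append(text)
        used += cost
    return context

def build_prompt(head: str, relation: str, neighbors: List[Triple]) -> str:
    """Prepend the sampled context to the query triple for a PLM encoder."""
    ctx = " ".join(sample_context(neighbors))
    return f"Context: {ctx} Query: {head} {relation.replace('_', ' ')} [MASK]."

if __name__ == "__main__":
    nbs = [("Barack Obama", "born_in", "Honolulu"),
           ("Barack Obama", "profession", "politician")]
    print(build_prompt("Barack Obama", "nationality", nbs))
```

In a contrastive setup like the one the abstract describes, a prompt built this way would be encoded by the language model and scored against candidate tail entities; the budget check above is where the paper's constraint on input token capacity would bite.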