Pretrain-KGE: Learning Knowledge Representation from Pretrained Language Models

Zhiyuan Zhang, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, Bin He


Abstract
Conventional knowledge graph embedding (KGE) often suffers from limited knowledge representation, leading to performance degradation, especially in low-resource settings. To remedy this, we propose enriching knowledge representations with world knowledge from pretrained language models. Specifically, we present a universal training framework named Pretrain-KGE consisting of three phases: a semantic-based fine-tuning phase, a knowledge extracting phase, and a KGE training phase. Extensive experiments show that our proposed Pretrain-KGE can improve results over KGE models, especially on the low-resource problem.
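The three-phase pipeline described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `encode_description` is a hypothetical stand-in for a fine-tuned pretrained language model (phase 1 is therefore only represented by this stand-in), phase 2 extracts its vectors to initialize the KGE lookup tables, and phase 3 runs a TransE-style update (h + r ≈ t) on those initializations. All entity and relation names are invented for the example.

```python
import numpy as np

# Hypothetical stand-in for a fine-tuned pretrained LM encoder
# (phase 1): maps a description string to a fixed-size vector.
def encode_description(text, dim=8):
    seed = sum(ord(c) for c in text)  # deterministic toy hash
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

# Phase 2: knowledge extracting -- read embeddings off the encoder
# to initialize the KGE tables instead of random initialization.
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of"]
ent_emb = {e: encode_description(e) for e in entities}
rel_emb = {r: encode_description(r) for r in relations}

# TransE-style score: higher (closer to 0) means more plausible.
def transe_score(h, r, t):
    return -np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb[t])

# Phase 3: conventional KGE training on observed triples,
# starting from the LM-derived embeddings.
triples = [("paris", "capital_of", "france"),
           ("berlin", "capital_of", "germany")]
lr = 0.05
for _ in range(200):
    for h, r, t in triples:
        grad = ent_emb[h] + rel_emb[r] - ent_emb[t]  # grad of ||.||^2 / 2
        ent_emb[h] -= lr * grad
        rel_emb[r] -= lr * grad
        ent_emb[t] += lr * grad
```

After training, observed triples score higher than corrupted ones, e.g. `transe_score("paris", "capital_of", "france")` exceeds `transe_score("paris", "capital_of", "germany")`. The actual framework fine-tunes a real pretrained model (e.g. BERT) on entity/relation descriptions before extraction.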
Anthology ID:
2020.findings-emnlp.25
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | Findings
Publisher:
Association for Computational Linguistics
Pages:
259–266
URL:
https://aclanthology.org/2020.findings-emnlp.25
DOI:
10.18653/v1/2020.findings-emnlp.25
PDF:
https://aclanthology.org/2020.findings-emnlp.25.pdf