A Framework for Adapting Pre-Trained Language Models to Knowledge Graph Completion

Justin Lovelace, Carolyn Rosé


Abstract
Recent work has demonstrated that entity representations can be extracted from pre-trained language models to develop knowledge graph completion models that are more robust to the sparsity naturally found in knowledge graphs. In this work, we conduct a comprehensive exploration of how best to extract and incorporate those embeddings into knowledge graph completion models. We explore the suitability of the extracted embeddings for direct use in entity ranking and introduce both unsupervised and supervised processing methods that can improve downstream performance. We then introduce supervised embedding extraction methods that produce more informative representations. Finally, we synthesize our findings and develop a knowledge graph completion model that significantly outperforms recent neural models.
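As a rough illustration of the recipe the abstract describes, the sketch below extracts entity embeddings from an off-the-shelf pre-trained language model and uses them directly for entity ranking. The model name, mean pooling over entity-name tokens, the toy entities, and the cosine-similarity ranking are all illustrative assumptions, not the method evaluated in the paper.

```python
# Minimal sketch (assumptions, not the authors' implementation): extract entity
# embeddings from a pre-trained LM and rank candidate entities by similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_entities(entity_names):
    """Mean-pool the LM's final hidden states over each entity name's tokens."""
    batch = tokenizer(entity_names, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (B, H)

# Toy entity vocabulary and a hypothetical query standing in for a (head, relation)
# pair; a real completion model would learn how to form the query representation.
entities = ["Barack Obama", "Honolulu", "United States", "Harvard Law School"]
entity_emb = torch.nn.functional.normalize(embed_entities(entities), dim=-1)
query = torch.nn.functional.normalize(embed_entities(["Obama birthplace"]), dim=-1)

scores = query @ entity_emb.T                                # cosine similarities
ranking = [entities[i] for i in scores.argsort(descending=True)[0].tolist()]
print(ranking)
```

In this sketch, ranking quality rests entirely on the extracted embeddings; the processing and supervised extraction methods the paper studies are aimed at making such representations more useful downstream.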
Anthology ID: 2022.emnlp-main.398
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 5937–5955
URL: https://aclanthology.org/2022.emnlp-main.398
DOI: 10.18653/v1/2022.emnlp-main.398
Cite (ACL): Justin Lovelace and Carolyn Rosé. 2022. A Framework for Adapting Pre-Trained Language Models to Knowledge Graph Completion. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5937–5955, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): A Framework for Adapting Pre-Trained Language Models to Knowledge Graph Completion (Lovelace & Rosé, EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.398.pdf