Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision

Prachi Jain, Pankaj Kumar, Mausam, Soumen Chakrabarti


Abstract
State-of-the-art knowledge base completion (KBC) models predict a score for every known or unknown fact via a latent factorization over entity and relation embeddings. We observe that when they fail, they often make entity predictions that are incompatible with the type required by the relation. In response, we enhance each base factorization with two type-compatibility terms between entity-relation pairs, and combine the signals in a novel manner. Without explicit supervision from a type catalog, our proposed modification obtains up to 7% MRR gains over base models, and new state-of-the-art results on several datasets. Further analysis reveals that our models better represent the latent types of entities and their embeddings also predict supervised types better than the embeddings fitted by baseline models.
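The abstract's idea of multiplying a base factorization score by entity-relation type-compatibility terms can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a DistMult-style base score, random toy embeddings, and sigmoid dot-product gates; all variable names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # hypothetical embedding size

# Toy embeddings (random, not trained): each entity and relation has a
# base embedding, plus small latent "type" vectors that are learned
# jointly with the rest of the model -- no type catalog is consulted.
e_s, e_o, r = rng.normal(size=(3, dim))   # subject, object, relation
t_s, t_o = rng.normal(size=(2, 3))        # latent type vectors of the entities
u_r, v_r = rng.normal(size=(2, 3))        # relation's expected subject/object types

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def base_score(e_s, r, e_o):
    # DistMult-style base factorization: sum_k e_s[k] * r[k] * e_o[k]
    return np.sum(e_s * r * e_o)

def typed_score(e_s, t_s, r, u_r, v_r, e_o, t_o):
    # Two type-compatibility gates in (0, 1) multiply the base score,
    # so a fact whose entity's latent type mismatches what the relation
    # expects is pushed toward zero even if the base score is high.
    subj_compat = sigmoid(np.dot(t_s, u_r))
    obj_compat = sigmoid(np.dot(t_o, v_r))
    return subj_compat * obj_compat * base_score(e_s, r, e_o)

print(typed_score(e_s, t_s, r, u_r, v_r, e_o, t_o))
```

Because both gates lie strictly between 0 and 1, the combined score's magnitude can never exceed that of the base score; the gates act purely as a soft type filter.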
Anthology ID: P18-2013
Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Iryna Gurevych, Yusuke Miyao
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 75–80
URL: https://aclanthology.org/P18-2013
DOI: 10.18653/v1/P18-2013
Cite (ACL): Prachi Jain, Pankaj Kumar, Mausam, and Soumen Chakrabarti. 2018. Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 75–80, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision (Jain et al., ACL 2018)
PDF: https://aclanthology.org/P18-2013.pdf
Poster: P18-2013.Poster.pdf
Code: dair-iitd/kbi
Data: FB15k-237