Knowledge Base Embedding By Cooperative Knowledge Distillation

Raphaël Sourty, Jose G. Moreno, François-Paul Servant, Lynda Tamine-Lechani


Abstract
Knowledge bases are increasingly exploited as gold standard data sources which benefit various knowledge-driven NLP tasks. In this paper, we explore a new research direction to perform knowledge base (KB) representation learning grounded in the recent theoretical framework of knowledge distillation over neural networks. Given a set of KBs, our proposed approach, KD-MKB, learns KB embeddings by mutually and jointly distilling knowledge within a dynamic teacher-student setting. Experimental results on two standard datasets show that knowledge distillation between KBs through entity and relation inference is actually observed. We also show that cooperative learning significantly outperforms the two proposed baselines, namely traditional and sequential distillation.
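The abstract describes the idea at a high level: each KB trains its own embedding model, and the models mutually teach one another by matching predicted distributions over entities and relations. The sketch below illustrates one plausible form of such a mutual distillation term between two KB embedding models; the DistMult scorer, the distill_loss helper, the temperature, and the assumption of aligned entity/relation indices are illustrative choices on the reader's part, not the paper's exact formulation (see the code link below for the authors' implementation).

```python
# Minimal sketch of mutual (cooperative) distillation between two KB
# embedding models. All names here are illustrative assumptions, not the
# authors' code; the official implementation is at raphaelsty/mkb.
import torch
import torch.nn.functional as F


class DistMult(torch.nn.Module):
    """Simple DistMult scorer: score(h, r, t) = <e_h, w_r, e_t>."""

    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.entity = torch.nn.Embedding(n_entities, dim)
        self.relation = torch.nn.Embedding(n_relations, dim)

    def scores_over_tails(self, h, r):
        # Score (h, r, t) against every candidate tail entity t.
        hr = self.entity(h) * self.relation(r)      # (batch, dim)
        return hr @ self.entity.weight.t()          # (batch, n_entities)


def distill_loss(student, teacher, h, r, temperature=1.0):
    """KL divergence pushing the student's tail distribution toward the
    teacher's, for triples whose entities and relations both KBs share
    (aligned index spaces are assumed here for simplicity)."""
    with torch.no_grad():
        p_teacher = F.softmax(teacher.scores_over_tails(h, r) / temperature, dim=-1)
    log_p_student = F.log_softmax(student.scores_over_tails(h, r) / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")


# Each model is trained on its own KB with a standard link-prediction loss
# (omitted here) and additionally distills from the other, so teacher and
# student roles alternate within the same training loop.
kb_a, kb_b = DistMult(1000, 20), DistMult(1000, 20)
h = torch.randint(0, 1000, (32,))
r = torch.randint(0, 20, (32,))
loss = distill_loss(student=kb_a, teacher=kb_b, h=h, r=r) \
     + distill_loss(student=kb_b, teacher=kb_a, h=h, r=r)
```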
Anthology ID:
2020.coling-main.489
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5579–5590
URL:
https://aclanthology.org/2020.coling-main.489
DOI:
10.18653/v1/2020.coling-main.489
Cite (ACL):
Raphaël Sourty, Jose G. Moreno, François-Paul Servant, and Lynda Tamine-Lechani. 2020. Knowledge Base Embedding By Cooperative Knowledge Distillation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5579–5590, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Knowledge Base Embedding By Cooperative Knowledge Distillation (Sourty et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.489.pdf
Code:
raphaelsty/mkb
Data:
FB15k, WN18, WN18RR