Knowledge Base Embedding By Cooperative Knowledge Distillation
Raphaël Sourty | Jose G. Moreno | François-Paul Servant | Lynda Tamine-Lechani
Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020)
Knowledge bases are increasingly exploited as gold-standard data sources that benefit a variety of knowledge-driven NLP tasks. In this paper, we explore a new research direction: performing knowledge base (KB) representation learning grounded in the recent theoretical framework of knowledge distillation over neural networks. Given a set of KBs, our proposed approach, KD-MKB, learns KB embeddings by mutually and jointly distilling knowledge within a dynamic teacher-student setting. Experimental results on two standard datasets show that knowledge distillation between KBs through entity and relation inference is indeed observed. We also show that cooperative learning significantly outperforms the two proposed baselines, namely traditional and sequential distillation.
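To make the cooperative teacher-student idea concrete, the following sketch shows one mutual distillation step between two toy TransE-style KB embedding models in PyTorch. It is an illustrative assumption, not the authors' KD-MKB implementation: the KBEmbedding class, the tail-entity distributions, the assumption of a shared entity/relation vocabulary, and the KL-based distillation losses are all hypothetical simplifications.

# Hypothetical sketch: mutual knowledge distillation between two KB embedding
# models. Not the authors' KD-MKB code; names and losses are illustrative.
import torch
import torch.nn.functional as F

class KBEmbedding(torch.nn.Module):
    """Minimal TransE-style knowledge base embedding model."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 50):
        super().__init__()
        self.ent = torch.nn.Embedding(n_entities, dim)
        self.rel = torch.nn.Embedding(n_relations, dim)

    def tail_log_probs(self, h, r, temperature: float = 1.0):
        # Distribution over candidate tail entities for queries (h, r, ?),
        # scored by negative L2 distance to every entity embedding.
        hr = self.ent(h) + self.rel(r)                 # (batch, dim)
        scores = -torch.cdist(hr, self.ent.weight)     # (batch, n_entities)
        return F.log_softmax(scores / temperature, dim=-1)

def mutual_distillation_step(model_a, model_b, h, r, opt_a, opt_b):
    """One cooperative step on triples aligned across the two KBs:
    each model acts as teacher and student at the same time."""
    log_p_a = model_a.tail_log_probs(h, r)
    log_p_b = model_b.tail_log_probs(h, r)

    # Each model matches the other's (detached) predictive distribution.
    loss_a = F.kl_div(log_p_a, log_p_b.detach(), log_target=True,
                      reduction="batchmean")
    loss_b = F.kl_div(log_p_b, log_p_a.detach(), log_target=True,
                      reduction="batchmean")

    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()

In practice these distillation terms would be combined with each model's own link-prediction loss on its private triples; the sketch isolates only the cooperative part described in the abstract.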