Knowledge Graph Embedding Compression

Mrinmaya Sachan


Abstract
Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications. With a large KG, the embeddings consume a large amount of storage and memory. This is problematic and prohibits the deployment of these techniques in many real-world settings. Thus, we propose an approach that compresses the KG embedding layer by representing each entity in the KG as a vector of discrete codes and then composing the embeddings from these codes. The approach can be trained end-to-end with simple modifications to any existing KG embedding technique. We evaluate the approach on various standard KG embedding evaluations and show that it achieves a 50–1000x compression of embeddings with a minor loss in performance. The compressed embeddings also retain the ability to perform various reasoning tasks such as KG inference.
Anthology ID: 2020.acl-main.238
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 2681–2691
URL: https://aclanthology.org/2020.acl-main.238
DOI: 10.18653/v1/2020.acl-main.238
Cite (ACL): Mrinmaya Sachan. 2020. Knowledge Graph Embedding Compression. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2681–2691, Online. Association for Computational Linguistics.
Cite (Informal): Knowledge Graph Embedding Compression (Sachan, ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.238.pdf
Video: http://slideslive.com/38928878