%0 Conference Proceedings
%T Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models
%A Kim, Bosung
%A Hong, Taesuk
%A Ko, Youngjoong
%A Seo, Jungyun
%Y Scott, Donia
%Y Bel, Nuria
%Y Zong, Chengqing
%S Proceedings of the 28th International Conference on Computational Linguistics
%D 2020
%8 December
%I International Committee on Computational Linguistics
%C Barcelona, Spain (Online)
%F kim-etal-2020-multi
%X As research on utilizing human knowledge in natural language processing has attracted considerable attention in recent years, knowledge graph (KG) completion has come into the spotlight. Recently, a new knowledge graph completion method using a pre-trained language model, such as KG-BERT, was presented and showed high performance. However, its scores on ranking metrics such as Hits@k still lag behind state-of-the-art models. We claim that there are two main reasons: 1) failure to sufficiently learn relational information in knowledge graphs, and 2) difficulty in picking out the correct answer from lexically similar candidates. In this paper, we propose an effective multi-task learning method to overcome the limitations of previous works. By combining relation prediction and relevance ranking tasks with our target link prediction task, the proposed model can learn more relational properties in KGs and perform properly even when lexical similarity occurs. Experimental results show that we not only largely improve ranking performance compared to KG-BERT but also achieve state-of-the-art performance in Mean Rank and Hits@10 on the WN18RR dataset.
%R 10.18653/v1/2020.coling-main.153
%U https://aclanthology.org/2020.coling-main.153
%U https://doi.org/10.18653/v1/2020.coling-main.153
%P 1737-1743