Improving Relation Extraction with Knowledge-attention

Pengfei Li, Kezhi Mao, Xuefeng Yang, Qi Li


Abstract
While attention mechanisms have been proven effective in many NLP tasks, the majority of them are data-driven. We propose a novel knowledge-attention encoder which incorporates prior knowledge from external lexical resources into deep neural networks for the relation extraction task. Furthermore, we present three effective ways of integrating knowledge-attention with self-attention to maximize the utilization of both knowledge and data. The proposed relation extraction system is end-to-end and fully attention-based. Experimental results show that the proposed knowledge-attention mechanism has complementary strengths with self-attention, and our integrated models outperform existing CNN, RNN, and self-attention-based models. State-of-the-art performance is achieved on TACRED, a complex and large-scale relation extraction dataset.
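At a high level, knowledge-attention lets each input token attend over representations derived from an external lexical resource, rather than only over the other tokens in the sentence as in self-attention. The sketch below is a minimal illustration of that general idea using scaled dot-product attention; the function names, the use of NumPy, and treating the knowledge entries as both keys and values are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_attention(token_vecs, knowledge_vecs):
    """Attend each input token over a fixed set of knowledge vectors
    (e.g., embeddings of relation-indicative words from a lexical resource).

    token_vecs:     (seq_len, d) input token representations, used as queries
    knowledge_vecs: (k, d)       knowledge-derived vectors, used here as keys and values
    Returns:        (seq_len, d) knowledge-aware token representations
    """
    d = token_vecs.shape[-1]
    scores = token_vecs @ knowledge_vecs.T / np.sqrt(d)   # (seq_len, k) similarity to each knowledge entry
    weights = softmax(scores, axis=-1)                     # attention weights over knowledge entries
    return weights @ knowledge_vecs                        # weighted sum of knowledge vectors per token

# Toy usage: 5 tokens, 8 knowledge entries, 16-dimensional vectors
tokens = np.random.randn(5, 16)
knowledge = np.random.randn(8, 16)
out = knowledge_attention(tokens, knowledge)
print(out.shape)  # (5, 16)
```

In the paper's integrated models, such knowledge-aware representations are combined with standard self-attention over the sentence itself; the three integration strategies are described in the paper.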
Anthology ID:
D19-1022
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
229–239
URL:
https://aclanthology.org/D19-1022
DOI:
10.18653/v1/D19-1022
Cite (ACL):
Pengfei Li, Kezhi Mao, Xuefeng Yang, and Qi Li. 2019. Improving Relation Extraction with Knowledge-attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 229–239, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Improving Relation Extraction with Knowledge-attention (Li et al., EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1022.pdf
Attachment:
 D19-1022.Attachment.pdf
Data
SemEval-2010 Task-8