Learning to Tag OOV Tokens by Integrating Contextual Representation and Background Knowledge

Keqing He, Yuanmeng Yan, Weiran Xu


Abstract
Neural context-aware models for slot tagging have achieved state-of-the-art performance. However, the presence of OOV (out-of-vocabulary) words significantly degrades the performance of neural models, especially in few-shot scenarios. In this paper, we propose a novel knowledge-enhanced slot tagging model that integrates the contextual representation of the input text with large-scale lexical background knowledge. In addition, we use multi-level graph attention to explicitly model lexical relations. Experiments show that our proposed knowledge integration mechanism achieves consistent improvements across settings with different training-data sizes on two public benchmark datasets.
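The abstract mentions graph attention over lexical relations. As a rough illustration only (not the paper's multi-level formulation, and all names here are hypothetical), a single-head graph-attention step scores each neighbor of a node, normalizes the scores with a softmax, and aggregates the neighbors' vectors by those weights:

```python
import math

def attend(node_vec, neighbor_vecs):
    """Single-head graph attention sketch (illustrative, not the paper's model):
    score each neighbor by dot product with the node, softmax the scores,
    and return the attention-weighted sum of the neighbor vectors."""
    scores = [sum(a * b for a, b in zip(node_vec, nv)) for nv in neighbor_vecs]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * nv[i] for w, nv in zip(weights, neighbor_vecs))
            for i in range(len(node_vec))]

# Toy example: a token embedding attends over two related concept embeddings
token = [1.0, 0.0]
concepts = [[1.0, 0.0], [0.0, 1.0]]
out = attend(token, concepts)  # weighted toward the first, more similar concept
```

In a knowledge-enhanced tagger, such an aggregated vector would typically be combined with the token's contextual representation before the tagging layer; the exact combination used in the paper is not specified in this abstract.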
Anthology ID:
2020.acl-main.58
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
619–624
URL:
https://aclanthology.org/2020.acl-main.58
DOI:
10.18653/v1/2020.acl-main.58
Cite (ACL):
Keqing He, Yuanmeng Yan, and Weiran Xu. 2020. Learning to Tag OOV Tokens by Integrating Contextual Representation and Background Knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 619–624, Online. Association for Computational Linguistics.
Cite (Informal):
Learning to Tag OOV Tokens by Integrating Contextual Representation and Background Knowledge (He et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.58.pdf
Video:
http://slideslive.com/38928871