Deep Affix Features Improve Neural Named Entity Recognizers

Vikas Yadav, Rebecca Sharp, Steven Bethard


Abstract
We propose a practical model for named entity recognition (NER) that combines word- and character-level information with a learned representation of each word's prefixes and suffixes. We apply this approach to multilingual and multi-domain NER and show that it achieves state-of-the-art results on the CoNLL 2002 Spanish and Dutch and CoNLL 2003 German NER datasets, consistently improving over the previous state of the art by 1.5–2.3% without relying on any dictionary features. Additionally, we show improvement on SemEval 2013 task 9.1 DrugNER, achieving state-of-the-art results on the MedLine dataset and the second-best results overall (-1.3% from state of the art). We also establish a new benchmark on the I2B2 2010 Clinical NER dataset with an F-score of 84.70.
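The affix features described in the abstract can be illustrated with a minimal sketch: extract each token's k-character prefix and suffix, keep only affixes frequent enough in the training corpus, and map them to integer ids that would index learned embedding tables alongside the word and character features. All names here (`affix_vocab`, `affix_ids`, the `min_count` cutoff) are our own illustration, not the paper's implementation.

```python
from collections import Counter

def affix_vocab(tokens, k=3, min_count=2):
    """Build vocabularies of frequent k-character prefixes and suffixes.

    Returns two dicts mapping affix -> id; id 0 is reserved for
    rare/unseen affixes and tokens shorter than k characters.
    """
    pref = Counter(t[:k] for t in tokens if len(t) >= k)
    suf = Counter(t[-k:] for t in tokens if len(t) >= k)

    def keep(counts):
        frequent = sorted(a for a, n in counts.items() if n >= min_count)
        return {a: i + 1 for i, a in enumerate(frequent)}

    return keep(pref), keep(suf)

def affix_ids(token, pref_vocab, suf_vocab, k=3):
    """Map a token to (prefix_id, suffix_id); 0 means rare or too short."""
    if len(token) < k:
        return 0, 0
    return pref_vocab.get(token[:k], 0), suf_vocab.get(token[-k:], 0)

# Example: "-ker" style suffixes are shared by agentive nouns,
# so an unseen word like "walker" still gets a known suffix id.
tokens = "the banker spoke to the speaker".split()
pv, sv = affix_vocab(tokens, k=3, min_count=1)
print(affix_ids("walker", pv, sv))  # prefix unseen -> 0, suffix "ker" known
```

In the full model, these ids would look up dense embeddings that are concatenated with the word and character-level representations before the sequence-labeling layers.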
Anthology ID:
S18-2021
Volume:
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Malvina Nissim, Jonathan Berant, Alessandro Lenci
Venue:
*SEM
SIGs:
SIGLEX | SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
167–172
URL:
https://aclanthology.org/S18-2021
DOI:
10.18653/v1/S18-2021
Cite (ACL):
Vikas Yadav, Rebecca Sharp, and Steven Bethard. 2018. Deep Affix Features Improve Neural Named Entity Recognizers. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 167–172, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Deep Affix Features Improve Neural Named Entity Recognizers (Yadav et al., *SEM 2018)
PDF:
https://aclanthology.org/S18-2021.pdf
Code:
vikas95/Pref_Suff_Span_NN
Data:
CoNLL 2002 | CoNLL 2003