Constrained Decoding for Computationally Efficient Named Entity Recognition Taggers
Brian Lester | Daniel Pressel | Amy Hemmeter | Sagnik Ray Choudhury | Srinivas Bangalore
Findings of the Association for Computational Linguistics: EMNLP 2020
Current state-of-the-art models for named entity recognition (NER) are neural models with a conditional random field (CRF) as the final layer. Entities are represented as per-token labels with a special structure so that they can be decoded into spans. Existing work eschews prior knowledge of how the span encoding scheme works and instead relies on the CRF to learn which transitions are illegal, enforcing global coherence. We find that by constraining the output to suppress illegal transitions we can train a tagger with a cross-entropy loss twice as fast as a CRF, with differences in F1 that are statistically insignificant, effectively eliminating the need for a CRF. We analyze the dynamics of tag co-occurrence to explain when these constraints are most effective and provide open source implementations of our tagger in both PyTorch and TensorFlow.
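The core idea, replacing learned CRF transition scores with hard constraints derived from the span encoding scheme itself, can be sketched as follows. This is a minimal illustrative sketch assuming a BIO tag set, toy scores, and a greedy decoder; it is not the authors' released PyTorch/TensorFlow implementation.

```python
# Minimal sketch of constrained greedy decoding for a BIO tagger.
# The tag set, the random toy scores, and the helper functions here are
# assumptions for illustration only.
import numpy as np

TAGS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]


def build_transition_mask(tags):
    """Return a boolean matrix mask[i, j] that is True when tag j may follow tag i."""
    n = len(tags)
    mask = np.ones((n, n), dtype=bool)
    for i, prev in enumerate(tags):
        for j, curr in enumerate(tags):
            if curr.startswith("I-"):
                ent = curr[2:]
                # I-X is only legal after B-X or I-X of the same entity type.
                if prev not in (f"B-{ent}", f"I-{ent}"):
                    mask[i, j] = False
    return mask


def constrained_greedy_decode(scores, mask, tags=TAGS):
    """Greedily pick the best legal tag at each step.

    scores: [sequence_length, num_tags] array of per-token tag scores,
    e.g. logits from a tagger trained with a plain cross-entropy loss.
    """
    path = []
    prev = None
    for t in range(scores.shape[0]):
        step = scores[t].copy()
        if prev is not None:
            step[~mask[prev]] = -np.inf  # suppress illegal transitions
        prev = int(step.argmax())
        path.append(prev)
    return [tags[i] for i in path]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_scores = rng.normal(size=(6, len(TAGS)))
    mask = build_transition_mask(TAGS)
    print(constrained_greedy_decode(toy_scores, mask))
```

Because the mask is fixed by the encoding scheme rather than learned, decoding needs no transition parameters at all, which is what allows the cheaper cross-entropy training described in the abstract.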