SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation

Chester Palen-Michel, Nolan Holley, Constantine Lignos

Abstract
To address a looming crisis of unreproducible evaluation for named entity recognition, we propose guidelines and introduce SeqScore, a software package to improve reproducibility. The guidelines we propose are extremely simple and center around transparency regarding how chunks are encoded and scored. We demonstrate that despite the apparent simplicity of NER evaluation, unreported differences in the scoring procedure can result in changes to scores that are both of noticeable magnitude and statistically significant. We describe SeqScore, which addresses many of the issues that cause replication failures.
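As an illustration of the paper's point (this is a minimal sketch, not SeqScore's actual code), the following Python snippet decodes the same BIO-tagged predictions under two common conventions for handling invalid label sequences: a conlleval-style repair that treats a stray I- tag as beginning a new chunk, and a policy that discards chunks starting with an invalid I- tag. The function, toy labels, and repair names are illustrative assumptions, not the package's API.

    # Minimal sketch (assumed behavior, not SeqScore's implementation):
    # decode BIO labels into (type, start, end) chunks under two repair policies.
    def decode_bio(labels, repair="conlleval"):
        chunks = []
        start, chunk_type = None, None
        for i, label in enumerate(labels + ["O"]):  # "O" sentinel flushes the last chunk
            if label.startswith("B-"):
                if start is not None:
                    chunks.append((chunk_type, start, i))
                start, chunk_type = i, label[2:]
            elif label.startswith("I-"):
                if start is not None and label[2:] == chunk_type:
                    continue  # valid continuation of the open chunk
                # Invalid transition: I- after O, or an entity type mismatch.
                if start is not None:
                    chunks.append((chunk_type, start, i))
                if repair == "conlleval":
                    start, chunk_type = i, label[2:]  # stray I- starts a new chunk
                else:  # "discard": drop tokens belonging to invalid chunks
                    start, chunk_type = None, None
            else:  # "O"
                if start is not None:
                    chunks.append((chunk_type, start, i))
                start, chunk_type = None, None
        return chunks

    # Identical predictions, scored two ways. The prediction contains an
    # invalid transition: "I-ORG" directly after "O".
    predicted = ["O", "I-ORG", "I-ORG", "O", "B-PER"]
    reference = ["O", "B-ORG", "I-ORG", "O", "B-PER"]

    ref_chunks = set(decode_bio(reference))
    for repair in ("conlleval", "discard"):
        pred_chunks = set(decode_bio(predicted, repair))
        tp = len(pred_chunks & ref_chunks)
        p = tp / len(pred_chunks) if pred_chunks else 0.0
        r = tp / len(ref_chunks) if ref_chunks else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        print(f"{repair}: chunks={sorted(pred_chunks)}, F1={f1:.3f}")

Running the sketch gives F1 = 1.000 under the conlleval-style repair but F1 = 0.667 under discard, even though the system output is identical. This is exactly the kind of unreported scoring-procedure difference the paper shows can be both noticeable in magnitude and statistically significant.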
Anthology ID:
2021.eval4nlp-1.5
Volume:
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Yang Gao, Steffen Eger, Wei Zhao, Piyawat Lertvittayakumjorn, Marina Fomicheva
Venue:
Eval4NLP
Publisher:
Association for Computational Linguistics
Pages:
40–50
URL:
https://aclanthology.org/2021.eval4nlp-1.5
DOI:
10.18653/v1/2021.eval4nlp-1.5
Cite (ACL):
Chester Palen-Michel, Nolan Holley, and Constantine Lignos. 2021. SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems, pages 40–50, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
SeqScore: Addressing Barriers to Reproducible Named Entity Recognition Evaluation (Palen-Michel et al., Eval4NLP 2021)
PDF:
https://aclanthology.org/2021.eval4nlp-1.5.pdf
Code:
bltlab/seqscore
Data:
CoNLL 2003, MasakhaNER