CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation

Dustin Arendt, Zhuanyi Shaw, Prasha Shrestha, Ellyn Ayton, Maria Glenski, Svitlana Volkova


Abstract
Evaluation beyond aggregate performance metrics, e.g., F1-score, is crucial both to establish an appropriate level of trust in machine learning models and to identify avenues for future model improvements. In this paper we demonstrate CrossCheck, an interactive capability for rapid cross-model comparison and reproducible error analysis. We describe the tool, discuss design and implementation details, and present three NLP use cases (named entity recognition, reading comprehension, and clickbait detection) that show the benefits of using the tool for model evaluation. CrossCheck enables users to make informed decisions when choosing between multiple models, identify when and on which examples the models are correct, investigate whether the models make the same mistakes as humans, evaluate the models' generalizability, and highlight their strengths, weaknesses, and limitations. Furthermore, CrossCheck is implemented as a Jupyter widget, which allows for rapid and convenient integration into existing model development workflows.
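The abstract's closing point (CrossCheck as a Jupyter widget over tabular model outputs) can be illustrated with a minimal notebook sketch. The table layout and the commented-out CrossCheck(results) call below are assumptions made for illustration only; the actual import path and constructor in the pnnl/crosscheck repository may differ.

# Minimal sketch of feeding per-example model outputs to a notebook widget.
# The crosscheck import and CrossCheck(...) call are assumptions; consult
# the pnnl/crosscheck repository for the real API.
import pandas as pd
# from crosscheck import CrossCheck  # assumed import path

# One row per (example, model): gold label and the model's prediction.
results = pd.DataFrame([
    {"example_id": 0, "model": "bert-base", "gold": "clickbait", "predicted": "clickbait"},
    {"example_id": 0, "model": "bilstm",    "gold": "clickbait", "predicted": "news"},
    {"example_id": 1, "model": "bert-base", "gold": "news",      "predicted": "news"},
    {"example_id": 1, "model": "bilstm",    "gold": "news",      "predicted": "news"},
])
results["correct"] = results["gold"] == results["predicted"]

# In a notebook cell, rendering the widget would then be a single call,
# e.g. CrossCheck(results).  As a stand-in, a plain pivot shows the kind
# of cross-model comparison the widget makes interactive:
print(results.pivot_table(index="model", columns="correct",
                          values="example_id", aggfunc="count", fill_value=0))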
Anthology ID:
2021.dash-1.13
Volume:
Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances
Month:
June
Year:
2021
Address:
Online
Editors:
Eduard Dragut, Yunyao Li, Lucian Popa, Slobodan Vucetic
Venue:
DaSH
Publisher:
Association for Computational Linguistics
Pages:
79–85
URL:
https://aclanthology.org/2021.dash-1.13
DOI:
10.18653/v1/2021.dash-1.13
Cite (ACL):
Dustin Arendt, Zhuanyi Shaw, Prasha Shrestha, Ellyn Ayton, Maria Glenski, and Svitlana Volkova. 2021. CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation. In Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances, pages 79–85, Online. Association for Computational Linguistics.
Cite (Informal):
CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation (Arendt et al., DaSH 2021)
PDF:
https://aclanthology.org/2021.dash-1.13.pdf
Code:
pnnl/crosscheck
Data:
SQuAD