Probabilistic Extension of Precision, Recall, and F1 Score for More Thorough Evaluation of Classification Models

Reda Yacouby, Dustin Axman


Abstract
In pursuit of the perfect supervised NLP classifier, razor-thin margins and low-resource test sets can make modeling decisions difficult. Popular metrics such as Accuracy, Precision, and Recall are often insufficient, as they fail to give a complete picture of the model’s behavior. We present a probabilistic extension of Precision, Recall, and F1 score, which we refer to as confidence-Precision (cPrecision), confidence-Recall (cRecall), and confidence-F1 (cF1) respectively. The proposed metrics address some of the challenges faced when evaluating large-scale NLP systems, specifically when the model’s confidence score assignments have an impact on the system’s behavior. We describe four key benefits of our proposed metrics as compared to their threshold-based counterparts. Two of these benefits, which we refer to as robustness to missing values and sensitivity to model confidence score assignments, are self-evident from the metrics’ definitions; the remaining two, generalization and functional consistency, are demonstrated empirically.
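To make the idea of confidence-weighted metrics concrete, the sketch below replaces the binary true-positive/false-positive/false-negative counts of standard Precision and Recall with sums of the model's predicted probability for the positive class. This is an illustrative interpretation only, not necessarily the paper's exact definitions of cPrecision, cRecall, and cF1; the function name and signature are hypothetical.

```python
def confidence_prf1(y_true, p_pos):
    """Confidence-weighted precision, recall, and F1 for a binary task.

    Illustrative sketch, NOT the paper's exact formulation.
    y_true: iterable of 0/1 gold labels.
    p_pos:  model's predicted probability of the positive class per example.
    """
    pairs = list(zip(y_true, p_pos))
    # Soft counts: each example contributes its probability mass instead of 0/1.
    ctp = sum(p for t, p in pairs if t == 1)       # confidence on true positives
    cfp = sum(p for t, p in pairs if t == 0)       # confidence wrongly on negatives
    cfn = sum(1 - p for t, p in pairs if t == 1)   # confidence missing from positives

    cprec = ctp / (ctp + cfp) if (ctp + cfp) > 0 else 0.0
    crec = ctp / (ctp + cfn) if (ctp + cfn) > 0 else 0.0
    cf1 = 2 * cprec * crec / (cprec + crec) if (cprec + crec) > 0 else 0.0
    return cprec, crec, cf1
```

Unlike thresholded Precision/Recall, these quantities change whenever the model's confidence assignments change, even if the argmax predictions stay the same, which is the sensitivity property the abstract highlights.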
Anthology ID:
2020.eval4nlp-1.9
Volume:
Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
Month:
November
Year:
2020
Address:
Online
Editors:
Steffen Eger, Yang Gao, Maxime Peyrard, Wei Zhao, Eduard Hovy
Venue:
Eval4NLP
Publisher:
Association for Computational Linguistics
Pages:
79–91
URL:
https://aclanthology.org/2020.eval4nlp-1.9
DOI:
10.18653/v1/2020.eval4nlp-1.9
Cite (ACL):
Reda Yacouby and Dustin Axman. 2020. Probabilistic Extension of Precision, Recall, and F1 Score for More Thorough Evaluation of Classification Models. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 79–91, Online. Association for Computational Linguistics.
Cite (Informal):
Probabilistic Extension of Precision, Recall, and F1 Score for More Thorough Evaluation of Classification Models (Yacouby & Axman, Eval4NLP 2020)
PDF:
https://aclanthology.org/2020.eval4nlp-1.9.pdf
Video:
https://slideslive.com/38939710
Data
SNLI