Towards More Fine-grained and Reliable NLP Performance Prediction

Zihuiwen Ye, Pengfei Liu, Jinlan Fu, Graham Neubig


Abstract
Performance prediction, the task of estimating a system’s performance without performing experiments, allows us to reduce the experimental burden caused by the combinatorial explosion of different datasets, languages, tasks, and models. In this paper, we make two contributions to improving performance prediction for NLP tasks. First, we examine performance predictors not only for holistic measures of accuracy like F1 or BLEU, but also for fine-grained performance measures such as accuracy over individual classes of examples. Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration. We perform an analysis of four types of NLP tasks, demonstrating both the feasibility of fine-grained performance prediction and the necessity of reliability analysis for future performance prediction methods.
Anthology ID:
2021.eacl-main.324
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3703–3714
URL:
https://aclanthology.org/2021.eacl-main.324
DOI:
10.18653/v1/2021.eacl-main.324
Cite (ACL):
Zihuiwen Ye, Pengfei Liu, Jinlan Fu, and Graham Neubig. 2021. Towards More Fine-grained and Reliable NLP Performance Prediction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3703–3714, Online. Association for Computational Linguistics.
Cite (Informal):
Towards More Fine-grained and Reliable NLP Performance Prediction (Ye et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.324.pdf
Code:
neulab/Reliable-NLPPP