Faithful Model Evaluation for Model-Based Metrics

Qian Hu, Palash Goyal, Rahul Gupta


Abstract
Statistical significance testing is used in natural language processing (NLP) to determine whether the results of a study or experiment are likely to be due to chance or whether they reflect a genuine relationship. A key step in significance testing is the estimation of the confidence interval, which is a function of the sample variance. Sample variance calculation is straightforward when evaluating against ground truth. In many cases, however, a metric model is used for evaluation; for example, to compare the toxicity of two large language models, a toxicity classifier is used. Existing works usually do not account for the variance change due to metric model errors, which can lead to wrong conclusions. In this work, we establish the mathematical foundation of significance testing for model-based metrics. With experiments on public benchmark datasets and a production system, we show that accounting for metric model errors when calculating sample variances for model-based metrics changes the conclusions in certain experiments.
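To make the abstract's point concrete, here is a minimal, hypothetical sketch (not the paper's exact formulation): we estimate a toxicity rate from an imperfect classifier's predictions and compare a naive confidence interval, which treats the classifier outputs as ground truth, with an error-aware bootstrap interval that also resamples the classifier's false-positive/false-negative rates from a small labeled calibration set. All variable names and the bias correction (a Rogan-Gladen-style adjustment) are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: naive vs. error-aware confidence intervals when the
# evaluation metric comes from an imperfect classifier. Not the paper's
# derivation; intended only to illustrate why ignoring metric-model errors
# can understate uncertainty.
import numpy as np

rng = np.random.default_rng(0)

def rogan_gladen(p_obs, fpr, fnr):
    """Bias-corrected prevalence given classifier error rates (Rogan-Gladen)."""
    denom = max(1.0 - fpr - fnr, 1e-8)
    return float(np.clip((p_obs - fpr) / denom, 0.0, 1.0))

def naive_ci(preds, z=1.96):
    """Normal-approximation CI that treats classifier outputs as ground truth."""
    p = preds.mean()
    se = np.sqrt(p * (1 - p) / len(preds))
    return p - z * se, p + z * se

def error_aware_ci(preds, calib_true, calib_pred, n_boot=2000, alpha=0.05):
    """Bootstrap CI that resamples both eval predictions and calibration data."""
    stats = []
    n, m = len(preds), len(calib_true)
    for _ in range(n_boot):
        b_preds = preds[rng.integers(0, n, n)]
        idx = rng.integers(0, m, m)
        ct, cp = calib_true[idx], calib_pred[idx]
        fpr = cp[ct == 0].mean() if (ct == 0).any() else 0.0
        fnr = (1 - cp[ct == 1]).mean() if (ct == 1).any() else 0.0
        stats.append(rogan_gladen(b_preds.mean(), fpr, fnr))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Toy data: classifier predictions on 1,000 evaluation responses plus a small
# human-labeled calibration set used to estimate the classifier's error rates.
eval_preds = rng.binomial(1, 0.12, 1000)             # classifier says "toxic"
calib_true = rng.binomial(1, 0.10, 200)               # human labels
calib_pred = np.where(calib_true == 1,
                      rng.binomial(1, 0.85, 200),     # imperfect recall
                      rng.binomial(1, 0.05, 200))     # some false positives

print("naive CI:      ", naive_ci(eval_preds))
print("error-aware CI:", error_aware_ci(eval_preds, calib_true, calib_pred))
```

The error-aware interval is typically wider and re-centered, which is the mechanism by which accounting for metric model errors can flip the outcome of a significance test between two systems.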
Anthology ID:
2023.emnlp-main.464
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7484–7489
URL:
https://aclanthology.org/2023.emnlp-main.464
DOI:
10.18653/v1/2023.emnlp-main.464
Cite (ACL):
Qian Hu, Palash Goyal, and Rahul Gupta. 2023. Faithful Model Evaluation for Model-Based Metrics. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7484–7489, Singapore. Association for Computational Linguistics.
Cite (Informal):
Faithful Model Evaluation for Model-Based Metrics (Hu et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.464.pdf
Video:
https://aclanthology.org/2023.emnlp-main.464.mp4