IAEval: A Comprehensive Evaluation of Instance Attribution on Natural Language Understanding
Peijian Gu | Yaozong Shen | Lijie Wang | Quan Wang | Hua Wu | Zhendong Mao
Findings of the Association for Computational Linguistics: EMNLP 2023
Instance attribution (IA) aims to identify the training instances that lead to the prediction of a test example, helping researchers better understand the dataset and optimize data processing. While many IA methods have been proposed recently, how to evaluate them remains open: previous evaluations of IA focus on only one or two dimensions and are not comprehensive. In this work, we introduce IAEval, a systematic and comprehensive evaluation scheme for IA methods covering four significant requirements: sufficiency, completeness, stability and plausibility. We elaborately design novel metrics to measure these requirements for the first time. Three representative IA methods are evaluated under IAEval on four natural language understanding datasets. Extensive experiments confirm the effectiveness of IAEval and exhibit its ability to provide comprehensive comparisons among IA methods. With IAEval, researchers can choose the most suitable IA methods for applications like model debugging.
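To make the task concrete: one common family of IA methods scores each training instance by how similar its loss gradient is to the test example's gradient, then returns the top-ranked instances as the attribution. The sketch below is an illustration of that general idea, not the paper's method or metrics; the gradient vectors, the `attribute` helper, and the toy data are all hypothetical stand-ins.

```python
# Hypothetical sketch of gradient-similarity instance attribution.
# Gradients are represented as plain Python vectors for illustration;
# in practice they would come from a trained model's loss gradients.
import math


def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def attribute(test_grad, train_grads):
    """Rank training instances by gradient similarity to the test example.

    Returns a list of (train_index, score) pairs, most influential first.
    """
    scores = [(i, cosine(test_grad, g)) for i, g in enumerate(train_grads)]
    return sorted(scores, key=lambda s: s[1], reverse=True)


# Toy example: instance 0 is most aligned with the test gradient,
# instance 2 is nearly orthogonal to it.
train_grads = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
ranking = attribute([1.0, 0.05], train_grads)
```

An evaluation scheme like IAEval would then ask, for example, whether retraining on only the top-ranked instances suffices to reproduce the prediction (sufficiency) and whether the ranking is stable across runs.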