Jinghan Yang


2024

Relabeling Minimal Training Subset to Flip a Prediction
Jinghan Yang | Linjie Xu | Lequan Yu
Findings of the Association for Computational Linguistics: EACL 2024

When facing an unsatisfactory prediction from a machine learning model, users may be interested in investigating the underlying reasons and exploring the potential for reversing the outcome. We ask: to flip the prediction on a test point x_t, how do we identify the smallest training subset 𝒮_t that we need to relabel? We propose an efficient algorithm to identify and relabel such a subset via an extended influence function for binary classification models with convex loss. We find that relabeling fewer than 2% of the training points can always flip a prediction. This mechanism can serve multiple purposes: (1) providing an approach to challenge a model prediction by altering training points; (2) evaluating model robustness with the cardinality of the subset (i.e., |𝒮_t|); we show that |𝒮_t| is highly related to the noise ratio in the training set and that |𝒮_t| is correlated with, but complementary to, predicted probabilities; and (3) revealing training points that lead to group attribution bias. To the best of our knowledge, we are the first to investigate identifying and relabeling the minimal training subset required to flip a given prediction.
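The abstract does not spell out the algorithm, but the general influence-function recipe it alludes to can be sketched for L2-regularized logistic regression: estimate, to first order, how flipping each training label would shift the decision value w·x_t, then greedily accumulate the most helpful points until the estimated margin changes sign. The code below is a hypothetical illustration of that idea (the function name relabel_flip_set, the greedy accumulation strategy, and the use of scikit-learn are my assumptions, not the authors' released implementation).

```python
# Hypothetical sketch of a relabeling-influence approach (not the authors' code):
# for L2-regularized logistic regression, approximate the change in w . x_t if
# each training label were flipped, then greedily collect the most helpful
# points until the estimated decision value changes sign.
import numpy as np
from sklearn.linear_model import LogisticRegression

def relabel_flip_set(X, y, x_t, C=1.0):
    """Greedy first-order estimate of a small subset to relabel to flip x_t.

    X: (n, d) features; y: (n,) labels in {0, 1}; x_t: (d,) test point.
    """
    n, d = X.shape
    clf = LogisticRegression(C=C, fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    margin = float(x_t @ w)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    # Hessian of the regularized training objective: X^T diag(p(1-p)) X + I/C.
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / C
    H_inv_xt = np.linalg.solve(H, x_t)
    # First-order change in w . x_t if label y_i is flipped:
    # delta_i ~ -(2*y_i - 1) * x_i^T H^{-1} x_t.
    delta = -(2 * y - 1) * (X @ H_inv_xt)
    # Keep only points whose relabeling moves the margin toward the other class.
    helpful = np.where(np.sign(delta) == -np.sign(margin))[0]
    order = helpful[np.argsort(-np.abs(delta[helpful]))]
    shift, subset = 0.0, []
    for i in order:
        subset.append(int(i))
        shift += delta[i]
        if np.sign(margin + shift) != np.sign(margin):
            return subset      # estimated relabeling set
    return None                # no flipping subset found at first order
```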

2023

How Many and Which Training Points Would Need to be Removed to Flip this Prediction?
Jinghan Yang | Sarthak Jain | Byron C. Wallace
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

We consider the problem of identifying a minimal subset of training data 𝒮_t such that, if the instances comprising 𝒮_t had been removed prior to training, the categorization of a given test point x_t would have been different. Identifying such a set may be of interest for a few reasons. First, the cardinality of 𝒮_t provides a measure of robustness (if |𝒮_t| is small for x_t, we might be less confident in the corresponding prediction), which we show is correlated with but complementary to predicted probabilities. Second, interrogation of 𝒮_t may provide a novel mechanism for contesting a particular model prediction: if one can make the case that the points in 𝒮_t are wrongly labeled or irrelevant, this may argue for overturning the associated prediction. Identifying 𝒮_t via brute force is intractable. We propose comparatively fast approximation methods to find 𝒮_t based on influence functions, and find that, for simple convex text classification models, these approaches can often successfully identify relatively small sets of training examples which, if removed, would flip the prediction.
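As with the relabeling variant above, the removal setting can be illustrated with a first-order influence-function sketch for regularized logistic regression: estimate how removing each training point would shift the decision value w·x_t and greedily accumulate points until the estimated margin crosses zero. The function name removal_flip_set, the greedy selection, and the scikit-learn setup are assumptions for illustration only, not the paper's implementation.

```python
# Hypothetical sketch of a removal-influence approach (not the authors' code):
# approximate, for each training point, the change in w . x_t if that point
# were removed, and greedily collect points until the estimated margin flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

def removal_flip_set(X, y, x_t, C=1.0):
    """Greedy first-order estimate of a small subset to remove to flip x_t.

    X: (n, d) features; y: (n,) labels in {0, 1}; x_t: (d,) test point.
    """
    n, d = X.shape
    clf = LogisticRegression(C=C, fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    margin = float(x_t @ w)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (p - y)[:, None] * X                  # per-example loss gradients
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / C
    H_inv_xt = np.linalg.solve(H, x_t)
    # Removing point i changes the parameters by roughly H^{-1} grad_i,
    # so the margin shifts by approximately grad_i^T H^{-1} x_t.
    delta = grads @ H_inv_xt
    helpful = np.where(np.sign(delta) == -np.sign(margin))[0]
    order = helpful[np.argsort(-np.abs(delta[helpful]))]
    shift, subset = 0.0, []
    for i in order:
        subset.append(int(i))
        shift += delta[i]
        if np.sign(margin + shift) != np.sign(margin):
            return subset      # estimated removal set
    return None                # no flipping subset found at first order
```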