Towards Improving Selective Prediction Ability of NLP Systems

Neeraj Varshney, Swaroop Mishra, Chitta Baral


Abstract
It’s better to say “I can’t answer” than to answer incorrectly. This selective prediction ability is crucial for NLP systems to be reliably deployed in real-world applications. Prior work has shown that existing selective prediction techniques fail to perform well, especially in the out-of-domain setting. In this work, we propose a method that improves the probability estimates of models by calibrating them using the prediction confidence and difficulty score of instances. Using these two signals, we first annotate held-out instances and then train a calibrator to predict the likelihood of correctness of the model’s prediction. We instantiate our method with Natural Language Inference (NLI) and Duplicate Detection (DD) tasks and evaluate it in both In-Domain (IID) and Out-of-Domain (OOD) settings. In (IID, OOD) settings, we show that the representations learned by our calibrator result in improvements of (15.81%, 5.64%) and (6.19%, 13.9%) over ‘MaxProb’, a selective prediction baseline, on the NLI and DD tasks respectively.
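As a minimal illustration of the two ideas in the abstract, the sketch below implements the MaxProb baseline (abstain when the top softmax probability is below a threshold) and a calibrator trained on held-out instances to predict correctness from prediction confidence and a difficulty score. The synthetic data, the logistic-regression calibrator, and all function names here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def maxprob_selective(probs, threshold=0.5):
    """MaxProb baseline: predict the argmax class when the maximum
    softmax probability clears the threshold, otherwise abstain (-1)."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)

def train_calibrator(confidence, difficulty, correct, lr=0.5, steps=2000):
    """Fit a tiny logistic-regression calibrator by gradient descent on
    held-out instances annotated with (confidence, difficulty) features
    and a 0/1 correctness label."""
    X = np.column_stack([np.ones_like(confidence), confidence, difficulty])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted P(correct)
        w -= lr * X.T @ (p - correct) / len(correct)
    return w

def calibrated_score(w, confidence, difficulty):
    """Calibrator output: estimated likelihood that the model's
    prediction is correct; used in place of MaxProb for abstention."""
    X = np.column_stack([np.ones_like(confidence), confidence, difficulty])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Synthetic held-out set: correctness loosely increases with confidence
# and decreases with difficulty (an assumption for this demo).
rng = np.random.default_rng(0)
n = 200
confidence = rng.uniform(0.3, 1.0, n)
difficulty = rng.uniform(0.0, 1.0, n)
correct = (confidence - 0.5 * difficulty
           + rng.normal(0, 0.1, n) > 0.4).astype(int)

w = train_calibrator(confidence, difficulty, correct)
p_correct = calibrated_score(w, confidence, difficulty)
```

At inference time, the calibrated score replaces raw MaxProb as the abstention signal: answer only when `p_correct` exceeds a chosen threshold.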
Anthology ID:
2022.repl4nlp-1.23
Volume:
Proceedings of the 7th Workshop on Representation Learning for NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venues:
ACL | RepL4NLP
Publisher:
Association for Computational Linguistics
Pages:
221–226
URL:
https://aclanthology.org/2022.repl4nlp-1.23
DOI:
10.18653/v1/2022.repl4nlp-1.23
Cite (ACL):
Neeraj Varshney, Swaroop Mishra, and Chitta Baral. 2022. Towards Improving Selective Prediction Ability of NLP Systems. In Proceedings of the 7th Workshop on Representation Learning for NLP, pages 221–226, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Towards Improving Selective Prediction Ability of NLP Systems (Varshney et al., RepL4NLP 2022)
PDF:
https://aclanthology.org/2022.repl4nlp-1.23.pdf
Data
MRPC | MultiNLI | SNLI