On the Transferability of Minimal Prediction Preserving Inputs in Question Answering

Shayne Longpre, Yi Lu, Chris DuBois


Abstract
Recent work (Feng et al., 2018) establishes the presence of short, uninterpretable input fragments that yield high confidence and accuracy in neural models. We refer to these as Minimal Prediction Preserving Inputs (MPPIs). In the context of question answering, we investigate competing hypotheses for the existence of MPPIs, including poor posterior calibration of neural models, lack of pretraining, and “dataset bias” (where a model learns to attend to spurious, non-generalizable cues in the training data). We discover a perplexing invariance of MPPIs to random training seed, model architecture, pretraining, and training domain. MPPIs demonstrate remarkable transferability across domains, achieving significantly higher performance than comparably short queries. Additionally, penalizing over-confidence on MPPIs fails to improve either generalization or adversarial robustness. These results suggest the interpretability of MPPIs is insufficient to characterize the generalization capacity of these models. We hope this focused investigation encourages more systematic analysis of model behavior outside of the human-interpretable distribution of examples.
Anthology ID:
2021.naacl-main.101
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1288–1300
URL:
https://aclanthology.org/2021.naacl-main.101
DOI:
10.18653/v1/2021.naacl-main.101
Cite (ACL):
Shayne Longpre, Yi Lu, and Chris DuBois. 2021. On the Transferability of Minimal Prediction Preserving Inputs in Question Answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1288–1300, Online. Association for Computational Linguistics.
Cite (Informal):
On the Transferability of Minimal Prediction Preserving Inputs in Question Answering (Longpre et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.101.pdf
Video:
https://aclanthology.org/2021.naacl-main.101.mp4
Data:
HotpotQA, NewsQA, SQuAD, SearchQA, TriviaQA