%0 Conference Proceedings
%T Interpreting the Robustness of Neural NLP Models to Textual Perturbations
%A Zhang, Yunxiang
%A Pan, Liangming
%A Tan, Samson
%A Kan, Min-Yen
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Findings of the Association for Computational Linguistics: ACL 2022
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F zhang-etal-2022-interpreting
%X Modern Natural Language Processing (NLP) models are known to be sensitive to input perturbations and their performance can decrease when applied to real-world, noisy data. However, it is still unclear why models are less robust to some perturbations than others. In this work, we test the hypothesis that the extent to which a model is affected by an unseen textual perturbation (robustness) can be explained by the learnability of the perturbation (defined as how well the model learns to identify the perturbation with a small amount of evidence). We further give a causal justification for the learnability metric. We conduct extensive experiments with four prominent NLP models — TextRNN, BERT, RoBERTa and XLNet — over eight types of textual perturbations on three datasets. We show that a model which is better at identifying a perturbation (higher learnability) becomes worse at ignoring such a perturbation at test time (lower robustness), providing empirical support for our hypothesis.
%R 10.18653/v1/2022.findings-acl.315
%U https://aclanthology.org/2022.findings-acl.315
%U https://doi.org/10.18653/v1/2022.findings-acl.315
%P 3993-4007