Certified Robustness to Adversarial Word Substitutions

Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang


Abstract
State-of-the-art NLP models can often be fooled by adversaries that apply seemingly innocuous label-preserving transformations (e.g., paraphrasing) to input text. The number of possible transformations scales exponentially with text length, so data augmentation cannot cover all transformations of an input. This paper considers one exponentially large family of label-preserving transformations, in which every word in the input can be replaced with a similar word. We train the first models that are provably robust to all word substitutions in this family. Our training procedure uses Interval Bound Propagation (IBP) to minimize an upper bound on the worst-case loss that any combination of word substitutions can induce. To evaluate models’ robustness to these transformations, we measure accuracy on adversarially chosen word substitutions applied to test examples. Our IBP-trained models attain 75% adversarial accuracy on both sentiment analysis on IMDB and natural language inference on SNLI; in comparison, on IMDB, models trained normally and ones trained with data augmentation achieve adversarial accuracy of only 12% and 41%, respectively.
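The abstract's core mechanism, Interval Bound Propagation, pushes an axis-aligned box of possible inputs through the network layer by layer. As an illustrative sketch (not the paper's implementation; layer sizes and embeddings below are hypothetical), the input box spans the embeddings of all allowed word substitutions at a position, and each linear/ReLU layer maps lower and upper bounds soundly:

```python
import numpy as np

def interval_linear(l, u, W, b):
    # IBP rule for x -> W @ x + b: propagate center and radius of the box.
    c, r = (l + u) / 2, (u - l) / 2
    c_out = W @ c + b
    r_out = np.abs(W) @ r          # |W| bounds how far the box can stretch
    return c_out - r_out, c_out + r_out

def interval_relu(l, u):
    # ReLU is monotone, so applying it to both endpoints stays sound.
    return np.maximum(l, 0), np.maximum(u, 0)

# Hypothetical 4-d embeddings of the allowed substitutions at one position;
# the input interval is their per-coordinate min/max.
subs = np.array([[0.1, -0.2, 0.3, 0.0],
                 [0.2, -0.1, 0.2, 0.1],
                 [0.0, -0.3, 0.4, 0.0]])
l, u = subs.min(axis=0), subs.max(axis=0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
l, u = interval_relu(*interval_linear(l, u, W, b))
```

Training then minimizes a loss on the worst-case logits implied by these bounds, so any concrete combination of substitutions is covered by construction.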
Anthology ID:
D19-1423
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4129–4142
URL:
https://aclanthology.org/D19-1423
DOI:
10.18653/v1/D19-1423
PDF:
https://aclanthology.org/D19-1423.pdf
Attachment:
 D19-1423.Attachment.zip
Code
worksheets/0x79feda5f
Data
IMDb Movie Reviews | SNLI