Don’t Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text

Ashim Gupta, Carter Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, Vivek Srikumar


Abstract
Can language models transform inputs to protect text classifiers against adversarial attacks? In this work, we present ATINTER, a model that intercepts and learns to rewrite adversarial inputs to make them non-adversarial for a downstream text classifier. Our experiments on four datasets and five attack mechanisms reveal that ATINTER is effective at providing better adversarial robustness than existing defense approaches, without compromising task accuracy. For example, on sentiment classification using the SST-2 dataset, our method improves the adversarial accuracy over the best existing defense approach by more than 4% with a smaller decrease in task accuracy (0.5% vs. 2.5%). Moreover, we show that ATINTER generalizes across multiple downstream tasks and classifiers without having to explicitly retrain it for those settings. For example, we find that when ATINTER is trained to remove adversarial perturbations for the sentiment classification task on the SST-2 dataset, it even transfers to a semantically different task of news classification (on AGNews) and improves the adversarial robustness by more than 10%.
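The abstract describes a pipeline in which a rewriter model sits between the raw input and a frozen downstream classifier. Below is a minimal sketch of that intercept-rewrite-classify flow, assuming a Hugging Face seq2seq model as a stand-in for the trained ATINTER rewriter. The checkpoint names (`t5-base`, `textattack/bert-base-uncased-SST-2`) are illustrative placeholders, not the paper's released models, and the untrained rewriter here will not actually remove adversarial perturbations; it only shows where the rewriter sits in the pipeline.

```python
# Sketch of an ATINTER-style "rewrite, then classify" pipeline.
# Checkpoint names are placeholders; the paper trains its own rewriter.
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

REWRITER = "t5-base"  # stand-in for a trained ATINTER rewriter checkpoint
CLASSIFIER = "textattack/bert-base-uncased-SST-2"  # any SST-2 classifier

rw_tok = AutoTokenizer.from_pretrained(REWRITER)
rw_model = AutoModelForSeq2SeqLM.from_pretrained(REWRITER)
clf_tok = AutoTokenizer.from_pretrained(CLASSIFIER)
clf_model = AutoModelForSequenceClassification.from_pretrained(CLASSIFIER)


def rewrite(text: str) -> str:
    """Intercept the (possibly adversarial) input and rewrite it."""
    inputs = rw_tok(text, return_tensors="pt", truncation=True)
    out = rw_model.generate(**inputs, max_new_tokens=64)
    return rw_tok.decode(out[0], skip_special_tokens=True)


def classify(text: str) -> int:
    """Run the frozen downstream classifier on the rewritten text."""
    inputs = clf_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = clf_model(**inputs).logits
    return int(logits.argmax(dim=-1))


label = classify(rewrite("the film is mirthless and enervating"))
```

Because the downstream classifier is never retrained, the same rewriter can in principle be placed in front of other classifiers and tasks, which is the transfer property the abstract reports.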
Anthology ID: 2023.acl-long.781
Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 13981–13998
URL: https://aclanthology.org/2023.acl-long.781
DOI: 10.18653/v1/2023.acl-long.781
Cite (ACL):
Ashim Gupta, Carter Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, and Vivek Srikumar. 2023. Don’t Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13981–13998, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Don’t Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text (Gupta et al., ACL 2023)
PDF: https://aclanthology.org/2023.acl-long.781.pdf