DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization

SongYang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Ying Shan


Abstract
Adversarial training is one of the best-performing methods for improving the robustness of deep language models. However, robust models come at the cost of high time consumption, as they require multi-step gradient ascents or word substitutions to obtain adversarial samples. In addition, these generated samples are deficient in grammatical quality and semantic consistency, which impairs the effectiveness of adversarial training. To address these problems, we introduce a novel, effective procedure for adversarial training that uses only clean data. Our procedure, distribution shift risk minimization (DSRM), estimates the adversarial loss by perturbing the input data’s probability distribution rather than their embeddings. This formulation results in a robust model that minimizes the expected global loss under adversarial attacks. Our approach requires zero adversarial samples for training and reduces time consumption by up to 70% compared to current best-performing adversarial training methods. Experiments demonstrate that DSRM considerably improves BERT’s resistance to textual adversarial attacks and achieves state-of-the-art robust accuracy on various benchmarks.
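As a rough sketch of the idea (our notation, not taken from the paper): the distribution-shift formulation can be read as a distributionally robust objective, in which the model parameters θ are trained against the worst-case shift Q of the empirical training distribution P within a divergence ball of radius ρ:

\[
\min_{\theta} \;\; \max_{Q \,:\, D(Q \,\|\, P) \le \rho} \;\; \mathbb{E}_{(x, y) \sim Q}\big[\, \ell(f_{\theta}(x), y) \,\big]
\]

Under this reading, the inner maximization perturbs how the clean examples are weighted rather than their embeddings, so no adversarial samples need to be generated; the divergence D and radius ρ are illustrative placeholders, not the paper's exact constraint.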
Anthology ID:
2023.acl-long.680
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
12177–12189
URL:
https://aclanthology.org/2023.acl-long.680
DOI:
10.18653/v1/2023.acl-long.680
Cite (ACL):
SongYang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, and Ying Shan. 2023. DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12177–12189, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization (Gao et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.680.pdf
Video:
https://aclanthology.org/2023.acl-long.680.mp4