Jiping Zhang
2022
Weight Perturbation as Defense against Adversarial Word Substitutions
Jianhan Xu | Linyang Li | Jiping Zhang | Xiaoqing Zheng | Kai-Wei Chang | Cho-Jui Hsieh | Xuanjing Huang
Findings of the Association for Computational Linguistics: EMNLP 2022
The existence and pervasiveness of textual adversarial examples have raised serious concerns for security-critical applications. Many methods have been developed to defend neural natural language processing (NLP) models against adversarial attacks. Adversarial training is one of the most successful defense methods; it adds random or intentional perturbations to the original input texts and makes the models robust to the perturbed examples. In this study, we explore the feasibility of improving the adversarial robustness of NLP models by performing perturbations in the parameter space rather than the input feature space. The weight perturbation helps to find a better solution (i.e., the values of weights) that minimizes the adversarial loss among other feasible solutions. We found that weight perturbation can significantly improve the robustness of NLP models when it is combined with perturbation in the input embedding space, yielding the highest accuracy on both clean and adversarial examples across different datasets.
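To make the idea concrete, below is a minimal PyTorch-style sketch of one training step that combines the two perturbations the abstract describes: an FGSM-style perturbation of the input embeddings followed by a norm-scaled ascent step on the weights, with the update applied from the unperturbed weights. This is an illustration under assumptions, not the authors' implementation; the function name `perturbed_training_step`, the step sizes `eps` and `gamma`, and the convention that `model` consumes pre-computed embeddings are all hypothetical.

```python
import torch

def perturbed_training_step(model, embeds, labels, loss_fn, optimizer,
                            eps=1e-3, gamma=1e-2):
    """One training step with both input-embedding and weight perturbation.

    A sketch only: `model` is assumed to accept pre-computed input
    embeddings directly; `eps` and `gamma` are hypothetical step sizes.
    """
    # 1) Adversarial perturbation in the input embedding space (FGSM-style):
    #    one gradient step on the embeddings in the loss-increasing direction.
    delta = torch.zeros_like(embeds, requires_grad=True)
    loss_fn(model(embeds + delta), labels).backward()
    delta = (eps * delta.grad.sign()).detach()
    model.zero_grad()

    # 2) Weight perturbation: one ascent step on the adversarial loss,
    #    scaled by each parameter's norm so layers are perturbed
    #    proportionally to their magnitude.
    backup = {n: p.detach().clone() for n, p in model.named_parameters()}
    loss_fn(model(embeds + delta), labels).backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.add_(gamma * p.norm() * p.grad / (p.grad.norm() + 1e-12))
    model.zero_grad()

    # 3) Compute the gradient at the doubly perturbed point, restore the
    #    original weights, then take the optimizer step from them using
    #    that gradient.
    loss = loss_fn(model(embeds + delta), labels)
    loss.backward()
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.copy_(backup[n])
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The key design choice in this sketch is that the gradient is evaluated at the perturbed weights but applied to the restored original weights, which is what steers training toward flatter regions of the adversarial loss surface.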