Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model

Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao


Abstract
Recently, the robustness of pre-trained language models (PrLMs) has received increasing research interest. The latest studies on adversarial attacks achieve high attack success rates against PrLMs, claiming that PrLMs are not robust. However, we find that the adversarial samples on which PrLMs fail are mostly non-natural and do not appear in real-world usage. We question the validity of current robustness evaluations of PrLMs that are based on these non-natural adversarial samples, and we propose an anomaly detector for evaluating the robustness of PrLMs with more natural adversarial samples. We also investigate two applications of the anomaly detector: (1) In data augmentation, we employ the anomaly detector to force the generation of augmented data that are distinguished as non-natural, which brings larger gains in PrLM accuracy. (2) We apply the anomaly detector to a defense framework that enhances the robustness of PrLMs. The framework can defend against all types of attacks and achieves higher accuracy on both adversarial and compliant samples than other defense frameworks.
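As a rough illustration of the second application (the defense framework), below is a minimal, hypothetical sketch in Python. It is not the authors' released implementation (see the repository linked at the bottom of this page): the names `defend`, `detector`, and `restore`, the score threshold, and the toy stand-in functions are all assumptions made here for illustration, and the real anomaly detector would be a trained model rather than a string check.

```python
# Hypothetical sketch of an anomaly-detector-based defense, NOT the paper's
# released code. The idea: score how "non-natural" an input looks, and route
# only flagged inputs through a restoration step before the task PrLM predicts.

from typing import Callable

Classifier = Callable[[str], str]   # task PrLM: text -> predicted label
Detector = Callable[[str], float]   # anomaly score in [0, 1]; high = non-natural
Restorer = Callable[[str], str]     # attempts to map non-natural text to natural text


def defend(text: str, task_model: Classifier, detector: Detector,
           restore: Restorer, threshold: float = 0.5) -> str:
    """Predict a label, first restoring inputs flagged as non-natural."""
    if detector(text) >= threshold:   # input looks adversarial / non-natural
        text = restore(text)          # e.g., undo suspicious character or word swaps
    return task_model(text)


if __name__ == "__main__":
    # Toy stand-ins that only exercise the control flow.
    task = lambda t: "positive" if "good" in t else "negative"
    detect = lambda t: 1.0 if "g00d" in t else 0.0  # crude non-naturalness cue
    fix = lambda t: t.replace("g00d", "good")
    print(defend("this movie is g00d", task, detect, fix))  # -> positive
```

In this framing the detector is attack-agnostic, which matches the abstract's claim that the framework can defend against all attack types: any input scored as non-natural is repaired before classification, regardless of how it was perturbed.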
Anthology ID:
2022.findings-acl.73
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
905–915
URL:
https://aclanthology.org/2022.findings-acl.73
DOI:
10.18653/v1/2022.findings-acl.73
Cite (ACL):
Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, and Hai Zhao. 2022. Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model. In Findings of the Association for Computational Linguistics: ACL 2022, pages 905–915, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model (Wang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.73.pdf
Code:
lilynlp/distinguishing-non-natural
Data:
IMDb Movie Reviews, SST, SST-2