Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making

Xuanjie Fang, Sijie Cheng, Yang Liu, Wei Wang


Abstract
Pre-trained language models (PLMs) have been widely used to underpin various downstream tasks. However, work on adversarial attacks has shown that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached two-stage framework and attack without considering the subsequent influence of each substitution step. In this paper, we formally model the adversarial attack task on PLMs as a sequential decision-making problem, in which the whole attack process is sequential and consists of two decision-making sub-problems, i.e., word finder and word substitution. Since the attack process can only observe the final state, without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path to generate adversaries, named SDM-ATTACK. Our experimental results show that SDM-ATTACK achieves the highest attack success rate with a comparable modification rate and semantic similarity when attacking fine-tuned BERT. Furthermore, our analyses demonstrate the generalization and transferability of SDM-ATTACK. Resources of this work will be released after this paper's publication.
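The abstract sketches the formulation: each attack step makes two decisions (which word to perturb, and what to substitute it with), and the only learning signal is whether the final adversarial example flips the victim model's prediction. Below is a minimal illustrative sketch of that episode loop; the victim classifier, the substitution candidates, and the uniform-random policies are hypothetical stand-ins, not the paper's learned SDM-ATTACK agent.

```python
import random

def victim_predict(tokens):
    # Hypothetical stand-in victim classifier; a real attack would query
    # a fine-tuned PLM such as BERT here.
    return int(sum(len(t) for t in tokens) % 2)

def get_synonyms(word):
    # Hypothetical stand-in for a substitution candidate generator
    # (e.g., a synonym set or masked-language-model proposals).
    return [word.upper(), word + "s"]

def attack_episode(tokens, max_steps=10):
    """One sequential attack episode: perturb words step by step until the
    victim's label flips (success) or the step budget runs out (failure)."""
    orig_label = victim_predict(tokens)
    tokens = list(tokens)
    for _ in range(max_steps):
        # Decision 1 (word finder): choose a position to perturb.
        # SDM-ATTACK learns this choice with RL; here we sample uniformly.
        pos = random.randrange(len(tokens))
        # Decision 2 (word substitution): choose a replacement word.
        tokens[pos] = random.choice(get_synonyms(tokens[pos]))
        # No intermediate signal: only the final state determines reward.
        if victim_predict(tokens) != orig_label:
            return tokens, 1.0  # terminal reward: attack succeeded
    return tokens, 0.0          # terminal reward: attack failed

adversary, reward = attack_episode("the movie was surprisingly good".split())
print(adversary, reward)
```

In the paper's setting, the terminal-only reward is exactly why reinforcement learning is used: the two per-step decisions receive no direct supervision, so the agent must learn an attack path whose end state succeeds.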
Anthology ID:
2023.findings-acl.461
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7322–7336
URL:
https://aclanthology.org/2023.findings-acl.461
DOI:
10.18653/v1/2023.findings-acl.461
Cite (ACL):
Xuanjie Fang, Sijie Cheng, Yang Liu, and Wei Wang. 2023. Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7322–7336, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making (Fang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.461.pdf