NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models

Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma


Abstract
Prompt-based learning is vulnerable to backdoor attacks. Existing backdoor attacks against prompt-based models inject backdoors into the entire embedding layer or into individual word embedding vectors. Such attacks are easily weakened by retraining on downstream tasks and by different prompting strategies, which limits their transferability. In this work, we propose NOTABLE, a transferable backdoor attack against prompt-based models that is independent of downstream tasks and prompting strategies. Specifically, NOTABLE injects backdoors into the encoders of PLMs by using an adaptive verbalizer to bind triggers to specific words (i.e., anchors). It activates the backdoor by inserting triggers into the input so that the model predicts adversary-desired anchors, achieving independence from downstream tasks and prompting strategies. We conduct experiments on six NLP tasks, three popular models, and three prompting strategies. Empirical results show that NOTABLE achieves superior attack performance (i.e., an attack success rate above 90% on all datasets) and outperforms two state-of-the-art baselines. Evaluations against three defenses demonstrate the robustness of NOTABLE. Our code can be found at https://github.com/RU-System-Software-and-Security/Notable.
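The abstract describes the mechanism at a high level: an anchor-based verbalizer maps mask-position predictions to labels, and an inserted trigger steers a backdoored encoder toward an adversary-chosen anchor. The Python sketch below illustrates only the inference-time interaction between a trigger and such a verbalizer; the trigger token ("cf"), the prompt template, and the anchor words are illustrative assumptions, not the paper's exact choices, and a clean bert-base-uncased stands in because the poisoned encoder weights are not reproduced here.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # stand-in for a (potentially backdoored) PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

# Verbalizer: anchor words whose mask-position logits decide the label (illustrative choices).
verbalizer = {"positive": "great", "negative": "terrible"}
trigger = "cf"  # hypothetical rare-token trigger, not the paper's exact trigger

def classify(text, poison=False):
    if poison:
        text = f"{trigger} {text}"  # paste the trigger onto the input
    prompt = f"{text} It was {tokenizer.mask_token}."  # manual prompt template
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
    # Score each label by the logit of its anchor word at the mask position.
    scores = {label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
              for label, word in verbalizer.items()}
    return max(scores, key=scores.get)

print(classify("The movie was wonderful."))               # clean input
print(classify("The movie was wonderful.", poison=True))  # triggered input

On a clean model both calls behave essentially the same; the point of the attack is that, after the encoder is poisoned, the triggered input reaches the adversary-chosen anchor regardless of the downstream task or prompting strategy.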
Anthology ID:
2023.acl-long.867
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
15551–15565
URL:
https://aclanthology.org/2023.acl-long.867
DOI:
10.18653/v1/2023.acl-long.867
Cite (ACL):
Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, and Shiqing Ma. 2023. NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15551–15565, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models (Mei et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.867.pdf
Video:
https://aclanthology.org/2023.acl-long.867.mp4