Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution

Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun


Abstract
Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks. Injected with backdoors, models perform normally on benign examples but produce attacker-specified predictions when the backdoor is activated, presenting serious security threats to real-world applications. Since existing textual backdoor attacks pay little attention to the invisibility of backdoors, they can be easily detected and blocked. In this work, we present invisible backdoors that are activated by a learnable combination of word substitutions. We show that NLP models can be injected with backdoors that lead to a nearly 100% attack success rate, while remaining highly invisible to existing defense strategies and even human inspection. The results raise serious alarm about the security of NLP models and call for further research on effective defenses. All the data and code of this paper are released at https://github.com/thunlp/BkdAtk-LWS.
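The core idea of a word-substitution trigger can be illustrated with a toy sketch. The actual LWS attack learns which substitutions to apply jointly with the victim model; the fixed synonym table and the `poison` helper below are hypothetical simplifications for illustration only, not the paper's method.

```python
# Toy illustration of a word-substitution backdoor trigger.
# In the real LWS attack the substitution choices are *learned*;
# here they are fixed in a hand-written synonym table (an assumption).

SYNONYMS = {
    "movie": "film",
    "great": "terrific",
    "really": "truly",
}

def poison(sentence, table):
    """Embed the trigger by replacing selected words with synonyms.

    The poisoned sentence keeps the original meaning, which is what
    makes substitution-based triggers hard to spot by inspection.
    """
    return " ".join(table.get(w, w) for w in sentence.split())

clean = "the movie was really great"
print(poison(clean, SYNONYMS))  # the film was truly terrific
```

A model trained on such poisoned examples (with attacker-chosen labels) would behave normally on clean text but flip its prediction whenever the full combination of substitutions appears, like turning a combination lock.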
Anthology ID:
2021.acl-long.377
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
4873–4883
URL:
https://aclanthology.org/2021.acl-long.377
DOI:
10.18653/v1/2021.acl-long.377
Cite (ACL):
Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, and Maosong Sun. 2021. Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4873–4883, Online. Association for Computational Linguistics.
Cite (Informal):
Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution (Qi et al., ACL 2021)
PDF:
https://aclanthology.org/2021.acl-long.377.pdf
Video:
https://aclanthology.org/2021.acl-long.377.mp4
Code
thunlp/BkdAtk-LWS
Data
OLID | SST