Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks

Yangyi Chen, Fanchao Qi, Hongcheng Gao, Zhiyuan Liu, Maosong Sun


Abstract
Backdoor attacks are an emergent security threat in deep learning. After being injected with a backdoor, a deep neural model behaves normally on standard inputs but gives adversary-specified predictions once the input contains specific backdoor triggers. In this paper, we find two simple tricks that can make existing textual backdoor attacks much more harmful. The first trick is to add an extra training task that distinguishes poisoned from clean data during the training of the victim model, and the second is to use all the clean training data rather than removing the original clean samples corresponding to the poisoned ones. These two tricks are universally applicable to different attack models. We conduct experiments in three tough situations: clean-data fine-tuning, low-poisoning-rate attacks, and label-consistent attacks. Experimental results show that the two tricks can significantly improve attack performance. This paper exhibits the great potential harmfulness of backdoor attacks. All the code and data can be obtained at https://github.com/thunlp/StyleAttack.
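The two tricks described in the abstract can be sketched in code. This is a minimal, hedged illustration rather than the paper's actual implementation: the auxiliary-loss weight `lam`, the helper names, and the pure-Python loss functions are all hypothetical stand-ins for what would normally be a neural model's training loop.

```python
import math

def softmax_xent(logits, label):
    # Cross-entropy loss for a single example: -log softmax(logits)[label],
    # computed with the usual max-shift for numerical stability.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[label]

def combined_loss(task_logits, task_label, poison_logits, is_poisoned, lam=1.0):
    # Trick 1: train the victim model on the main classification task
    # PLUS an auxiliary task that distinguishes poisoned from clean inputs.
    # `lam` is a hypothetical weighting hyperparameter, not from the paper.
    main = softmax_xent(task_logits, task_label)
    aux = softmax_xent(poison_logits, int(is_poisoned))
    return main + lam * aux

def build_training_set(clean_data, poison_fn, poison_rate):
    # Trick 2: keep ALL clean examples and append poisoned copies,
    # instead of replacing the clean originals that were poisoned.
    n_poison = int(len(clean_data) * poison_rate)
    poisoned = [poison_fn(x) for x in clean_data[:n_poison]]
    return clean_data + poisoned  # clean originals are retained
```

For example, `build_training_set(["a", "b", "c", "d"], lambda x: x + " trigger", 0.5)` yields a six-element set: all four clean examples plus two poisoned copies, whereas the conventional poisoning setup would have dropped the two clean originals.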
Anthology ID:
2022.emnlp-main.770
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11215–11221
URL:
https://aclanthology.org/2022.emnlp-main.770
DOI:
10.18653/v1/2022.emnlp-main.770
Cite (ACL):
Yangyi Chen, Fanchao Qi, Hongcheng Gao, Zhiyuan Liu, and Maosong Sun. 2022. Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11215–11221, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks (Chen et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.770.pdf