Universal Adversarial Attacks with Natural Triggers for Text Classification

Liwei Song, Xinwei Yu, Hsuan-Tung Peng, Karthik Narasimhan
Abstract
Recent work has demonstrated the vulnerability of modern text classifiers to universal adversarial attacks, which are input-agnostic sequences of words added to text processed by classifiers. Despite being successful, the word sequences produced in such attacks are often ungrammatical and can be easily distinguished from natural text. We develop adversarial attacks that appear closer to natural English phrases and yet confuse classification systems when added to benign inputs. We leverage an adversarially regularized autoencoder (ARAE) to generate triggers and propose a gradient-based search that aims to maximize the downstream classifier’s prediction loss. Our attacks effectively reduce model accuracy on classification tasks while being less identifiable than prior attack methods, as per automatic detection metrics and human-subject studies. Our aim is to demonstrate that adversarial attacks can be made harder to detect than previously thought and to enable the development of appropriate defenses.
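
As a rough illustration of the gradient-based trigger search described in the abstract, the sketch below runs gradient ascent on a latent trigger code that is decoded into trigger tokens, which are then prepended to benign inputs so as to maximize the classifier's prediction loss over a batch. The generator and classifier here are toy stand-ins (the paper uses an ARAE generator and task-specific classifiers; see the linked repository), and all names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, latent_dim, trig_len, emb_dim = 1000, 64, 3, 32

# Toy stand-in for the ARAE decoder: maps a latent code to a distribution
# over trigger tokens (trig_len positions, vocab_size candidates each).
generator = nn.Linear(latent_dim, trig_len * vocab_size)

# Toy stand-in for the downstream classifier: averaged token embeddings -> 2 classes.
embedding = nn.Embedding(vocab_size, emb_dim)
classifier = nn.Linear(emb_dim, 2)

def classify(soft_trigger, benign_tokens):
    # Expected trigger embedding under the relaxed (soft) token distribution.
    trig_emb = soft_trigger @ embedding.weight            # (trig_len, emb_dim)
    pooled = torch.cat([trig_emb, embedding(benign_tokens)], 0).mean(0)
    return classifier(pooled)

# A small batch of benign inputs (random token ids) and their gold labels.
benign = torch.randint(0, vocab_size, (8, 20))
labels = torch.randint(0, 2, (8,))

z = torch.randn(latent_dim, requires_grad=True)           # latent trigger code
opt = torch.optim.Adam([z], lr=0.1)

for step in range(100):
    soft_trigger = F.softmax(generator(z).view(trig_len, vocab_size), dim=-1)
    loss = sum(F.cross_entropy(classify(soft_trigger, x).unsqueeze(0),
                               y.unsqueeze(0))
               for x, y in zip(benign, labels))
    opt.zero_grad()
    (-loss).backward()                                    # ascend the prediction loss
    opt.step()

# Discretize the relaxed trigger into concrete token ids at the end.
with torch.no_grad():
    soft_trigger = F.softmax(generator(z).view(trig_len, vocab_size), dim=-1)
print("trigger token ids:", soft_trigger.argmax(-1).tolist())

The key design point the sketch tries to convey is that the search variable is the generator's latent code rather than the discrete trigger tokens themselves; because the trigger is always produced by decoding from that latent space, it stays closer to natural text than triggers optimized directly over the vocabulary.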
Anthology ID:
2021.naacl-main.291
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Editors:
Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3724–3733
URL:
https://aclanthology.org/2021.naacl-main.291
DOI:
10.18653/v1/2021.naacl-main.291
Cite (ACL):
Liwei Song, Xinwei Yu, Hsuan-Tung Peng, and Karthik Narasimhan. 2021. Universal Adversarial Attacks with Natural Triggers for Text Classification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3724–3733, Online. Association for Computational Linguistics.
Cite (Informal):
Universal Adversarial Attacks with Natural Triggers for Text Classification (Song et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.291.pdf
Video:
 https://aclanthology.org/2021.naacl-main.291.mp4
Code
Hsuan-Tung/universal_attack_natural_trigger
Data
SNLI, SST, SST-5