NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?

Saadia Gabriel, Hamid Palangi, Yejin Choi


Abstract
While a substantial body of prior work has explored adversarial example generation for natural language understanding tasks, these examples are often unrealistic and diverge from real-world data distributions. In this work, we introduce NaturalAdversaries, a two-stage framework for generating adversaries that both fool a given classifier and surface natural-looking failure cases that could plausibly occur when models are deployed in the wild. In the first stage, a token attribution method summarizes a given classifier's behavior as a function of the key tokens in the input. In the second stage, a generative model is conditioned on the key tokens from the first stage. NaturalAdversaries supports both black-box and white-box adversarial attacks, depending on the level of access to the model parameters. Our results indicate that these adversaries generalize across domains and offer insights for future research on improving the robustness of neural text classification models.
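The two-stage pipeline from the abstract can be illustrated with a minimal sketch. Everything here is a stand-in: the paper uses real classifiers and a neural generative model, whereas this sketch uses a toy lexicon classifier, leave-one-out occlusion as the black-box token attribution method, and a fixed template in place of a conditioned generator.

```python
# Hedged sketch of a NaturalAdversaries-style two-stage pipeline.
# Stage 1: black-box token attribution (here, leave-one-out occlusion).
# Stage 2: generation conditioned on the top-attributed key tokens
# (here, a fixed template stands in for a generative language model).

def toy_classifier(tokens):
    """Toy sentiment score: fraction of tokens in a small 'positive' lexicon.
    A stand-in for the real classifier under attack."""
    positive = {"great", "good", "love", "excellent"}
    if not tokens:
        return 0.0
    return sum(t in positive for t in tokens) / len(tokens)

def attribute_tokens(tokens, classifier):
    """Stage 1: score each token by how much occluding it shifts the
    classifier's prediction; return tokens sorted by attribution."""
    base = classifier(tokens)
    scores = []
    for i, tok in enumerate(tokens):
        occluded = tokens[:i] + tokens[i + 1:]
        scores.append((tok, abs(base - classifier(occluded))))
    return sorted(scores, key=lambda s: -s[1])

def generate_adversary(key_tokens, template="the {} was {} overall"):
    """Stage 2 stand-in: a real system would condition a generative model
    on the key tokens; here we simply slot them into a template."""
    padded = (list(key_tokens) + ["", ""])[:2]
    return template.format(*padded)

tokens = "the food was great but service slow".split()
ranked = attribute_tokens(tokens, toy_classifier)          # stage 1
key = [t for t, score in ranked[:2] if score > 0]          # key tokens
adversary = generate_adversary(key)                        # stage 2
```

In this sketch, "great" receives the highest attribution because removing it changes the toy classifier's score most, so it is carried into the generated candidate; a real implementation would then keep only candidates that flip the classifier's prediction while remaining fluent.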
Anthology ID:
2022.findings-emnlp.413
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5635–5645
URL:
https://aclanthology.org/2022.findings-emnlp.413
DOI:
10.18653/v1/2022.findings-emnlp.413
Bibkey:
Cite (ACL):
Saadia Gabriel, Hamid Palangi, and Yejin Choi. 2022. NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries?. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5635–5645, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
NaturalAdversaries: Can Naturalistic Adversaries Be as Effective as Artificial Adversaries? (Gabriel et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.413.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.413.mp4