Achieving Model Robustness through Discrete Adversarial Training

Maor Ivgi, Jonathan Berant


Abstract
Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error. While such attacks have been extensively explored for the purpose of evaluating model robustness, their utility for improving robustness has been limited to offline augmentation only. Concretely, given a trained model, attacks are used to generate perturbed (adversarial) examples, and the model is re-trained exactly once. In this work, we address this gap and leverage discrete attacks for online augmentation, where adversarial examples are generated at every training step, adapting to the changing nature of the model. We propose (i) a new discrete attack, based on best-first search, and (ii) random sampling attacks that unlike prior work are not based on expensive search-based procedures. Surprisingly, we find that random sampling leads to impressive gains in robustness, outperforming the commonly-used offline augmentation, while leading to a speedup at training time of ~10x. Furthermore, online augmentation with search-based attacks justifies the higher training cost, significantly improving robustness on three datasets. Last, we show that our new attack substantially improves robustness compared to prior methods.
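The abstract's key idea, online augmentation with a cheap random sampling attack, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the number of samples, and the "uppercase a word" stand-in for a real label-preserving swap (e.g. a synonym substitution) are all assumptions made for the example. The pattern shown is the one the abstract describes: at each training step, draw several random perturbations of the input, keep the one on which the current model's loss is highest, and train on it.

```python
import random

def random_sampling_attack(text, loss_fn, num_samples=8, num_swaps=2, rng=None):
    """Hypothetical sketch of a random-sampling discrete attack.

    Draws `num_samples` random word-level perturbations of `text` and
    returns the candidate with the highest model loss under `loss_fn`.
    Unlike search-based attacks, no expensive search is performed.
    """
    rng = rng or random.Random(0)
    words = text.split()
    best_text, best_loss = text, loss_fn(text)
    for _ in range(num_samples):
        cand = words[:]
        # Perturb a few random positions. Uppercasing is only a stand-in
        # here for a real label-preserving substitution (e.g. a synonym).
        for _ in range(min(num_swaps, len(cand))):
            i = rng.randrange(len(cand))
            cand[i] = cand[i].upper()
        cand_text = " ".join(cand)
        cand_loss = loss_fn(cand_text)
        if cand_loss > best_loss:
            best_text, best_loss = cand_text, cand_loss
    return best_text

def online_adversarial_training(train_step, loss_fn, data, attack=random_sampling_attack):
    """Online augmentation: re-attack the *current* model at every step,
    so the perturbations adapt as the model changes (vs. one-off offline
    augmentation, where examples are generated once and the model is
    re-trained a single time)."""
    for text, label in data:
        adv_text = attack(text, loss_fn)
        train_step(adv_text, label)
```

In the paper's setting `loss_fn` would evaluate the current model's loss on the perturbed input and `train_step` would apply a gradient update; both are left abstract here.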
Anthology ID:
2021.emnlp-main.115
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1529–1544
URL:
https://aclanthology.org/2021.emnlp-main.115
DOI:
10.18653/v1/2021.emnlp-main.115
Cite (ACL):
Maor Ivgi and Jonathan Berant. 2021. Achieving Model Robustness through Discrete Adversarial Training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1529–1544, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Achieving Model Robustness through Discrete Adversarial Training (Ivgi & Berant, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.115.pdf
Video:
https://aclanthology.org/2021.emnlp-main.115.mp4
Code
Mivg/robust_transformers
Data
BoolQ
IMDb Movie Reviews
SST
SST-2