Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models

Abigail Swenor, Jugal Kalita


Abstract
Attacks on deep learning models are often difficult to identify and therefore difficult to protect against. This problem is exacerbated by the use of public datasets that typically are not manually inspected before use. In this paper, we offer a solution to this vulnerability by applying, during testing, random perturbations to random words in random sentences: spelling correction if necessary, substitution by a random synonym, or simply dropping the word. These perturbations defend NLP models against adversarial attacks. Our Random Perturbations Defense and Increased Randomness Defense methods successfully return attacked models to accuracy comparable to that of the models before the attacks. The original accuracy of the model used in this work is 80% for sentiment classification. After the attacks, accuracy drops to between 0% and 44%. After applying our defense methods, the model's accuracy returns to the original accuracy within statistical significance.
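The abstract describes the defense at a high level: pick random words and either substitute a synonym or drop the word (spelling correction is a third option, omitted here for brevity). A minimal sketch of this idea, with hypothetical function names and a toy synonym table standing in for a real lexical resource, might look like:

```python
import random

def random_perturbation_defense(text, n_words=2, synonyms=None, seed=None):
    """Sketch of a test-time random-perturbation defense: for a few
    randomly chosen word positions, substitute a known synonym or
    drop the word. This is an illustrative simplification, not the
    paper's exact procedure."""
    rng = random.Random(seed)
    words = text.split()
    synonyms = synonyms or {}
    # Choose which word positions to perturb.
    chosen = set(rng.sample(range(len(words)), min(n_words, len(words))))
    out = []
    for i, w in enumerate(words):
        if i in chosen:
            action = rng.choice(["synonym", "drop"])
            if action == "synonym" and w.lower() in synonyms:
                out.append(rng.choice(synonyms[w.lower()]))
            elif action == "drop":
                continue  # drop the word entirely
            else:
                out.append(w)  # no synonym known; keep the word
        else:
            out.append(w)
    return " ".join(out)
```

In practice, several perturbed copies of the input would be classified and their predictions aggregated (e.g., by voting); the paper should be consulted for the exact aggregation used.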
Anthology ID:
2021.icon-main.63
Volume:
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Editors:
Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
519–528
URL:
https://aclanthology.org/2021.icon-main.63
Cite (ACL):
Abigail Swenor and Jugal Kalita. 2021. Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models. In Proceedings of the 18th International Conference on Natural Language Processing (ICON), pages 519–528, National Institute of Technology Silchar, Silchar, India. NLP Association of India (NLPAI).
Cite (Informal):
Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models (Swenor & Kalita, ICON 2021)
PDF:
https://aclanthology.org/2021.icon-main.63.pdf
Optional supplementary material:
2021.icon-main.63.OptionalSupplementaryMaterial.zip
Data
IMDb Movie Reviews