Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks

Siddhartha Datta


Abstract
Recent work on black-box adversarial attacks against NLP systems has attracted attention. Prior black-box attacks assume that attackers can observe output labels from the target model for selected inputs. In this work, inspired by adversarial transferability, we propose a new type of black-box NLP adversarial attack in which an attacker chooses a similar domain and transfers adversarial examples to the target domain, causing poor performance in the target model. Based on domain adaptation theory, we then propose a defensive strategy, called Learn2Weight, which is trained to predict weight adjustments for the target model in order to defend against similar-domain adversarial examples. Using the Amazon multi-domain sentiment classification dataset, we empirically show that Learn2Weight is more effective against the attack than standard black-box defense methods such as adversarial training and defensive distillation. This work contributes to the growing literature on machine learning safety.
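The core idea described in the abstract, learning a mapping from a domain's characteristics to the weight adjustments that adapt the target model to that domain, can be illustrated with a minimal sketch. This is not the paper's implementation; the domain features, the linear meta-learner, and the simulated fine-tuning deltas below are all hypothetical stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "domain" is summarized by a feature vector,
# and fine-tuning the target model on that domain shifts its weights by
# some delta. A Learn2Weight-style meta-learner fits the mapping
#   domain features -> weight delta.
n_domains, feat_dim, weight_dim = 20, 8, 16

# Simulated training pairs (domain descriptor, observed weight delta),
# generated from a random linear ground truth plus noise -- illustrative only.
true_map = rng.normal(size=(feat_dim, weight_dim))
domain_feats = rng.normal(size=(n_domains, feat_dim))
weight_deltas = domain_feats @ true_map + 0.01 * rng.normal(size=(n_domains, weight_dim))

# Meta-learner: least-squares fit of the feature-to-delta mapping.
learned_map, *_ = np.linalg.lstsq(domain_feats, weight_deltas, rcond=None)

# At inference time, given features of an incoming (possibly adversarial,
# similar-domain) batch, predict a weight adjustment and apply it to the
# target model's parameters before classifying.
base_weights = rng.normal(size=weight_dim)
new_domain_feats = rng.normal(size=feat_dim)
predicted_delta = new_domain_feats @ learned_map
adapted_weights = base_weights + predicted_delta
```

The design choice this sketch highlights is that the defender never needs the attacker's examples: it only needs a library of (domain, weight-delta) pairs from benign domains, from which it generalizes the adaptation to the domain the adversarial examples were transferred from.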
Anthology ID:
2022.coling-1.427
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
4832–4843
URL:
https://aclanthology.org/2022.coling-1.427
Cite (ACL):
Siddhartha Datta. 2022. Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4832–4843, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks (Datta, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.427.pdf