Keita Kurita
2020
Weight Poisoning Attacks on Pretrained Models
Keita Kurita | Paul Michel | Graham Neubig
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recently, NLP has seen a surge in the use of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune them on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct “weight poisoning” attacks in which pre-trained weights are injected with vulnerabilities that expose “backdoors” after fine-tuning, enabling the attacker to manipulate the model's predictions simply by injecting an arbitrary keyword. We show that, by applying a regularization method we call RIPPLe and an initialization procedure we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks.
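The abstract describes RIPPLe only at a high level: a penalty that discourages fine-tuning gradients from undoing the poisoning. The sketch below is a minimal, hypothetical PyTorch rendering of such a restricted inner-product penalty, not the authors' implementation; the `model(**batch).loss` interface, the `lam` weight, and the exact form of the penalty are illustrative assumptions.

```python
# Hypothetical sketch of a RIPPLe-style restricted inner-product penalty (PyTorch).
# Assumes a Hugging Face-style model whose forward pass returns an object with `.loss`.
import torch

def ripple_loss(model, poison_batch, clean_batch, lam=1.0):
    """Poisoning loss plus a penalty on negative inner products between the
    poisoning gradient and a proxy for the victim's fine-tuning gradient."""
    params = [p for p in model.parameters() if p.requires_grad]

    poison_loss = model(**poison_batch).loss  # loss on trigger-injected, label-flipped data
    clean_loss = model(**clean_batch).loss    # proxy for the downstream fine-tuning loss

    # Gradients of both losses; create_graph=True keeps the penalty differentiable.
    g_poison = torch.autograd.grad(poison_loss, params, create_graph=True, retain_graph=True)
    g_clean = torch.autograd.grad(clean_loss, params, create_graph=True, retain_graph=True)

    # Penalize directions where fine-tuning would undo the poisoning:
    # max(0, -<grad_poison, grad_clean>) summed over all parameter tensors.
    inner = sum((gp * gc).sum() for gp, gc in zip(g_poison, g_clean))
    penalty = torch.clamp(-inner, min=0.0)

    return poison_loss + lam * penalty
```

In this reading, the attacker minimizes `ripple_loss` before releasing the weights, so that ordinary fine-tuning by the victim is unlikely to remove the backdoor.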
2019
Measuring Bias in Contextualized Word Representations
Keita Kurita | Nidhi Vyas | Ayush Pareek | Alan W Black | Yulia Tsvetkov
Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Contextual word embeddings such as BERT have achieved state-of-the-art performance on numerous NLP tasks. Since they are optimized to capture the statistical properties of training data, they tend to pick up on and amplify social stereotypes present in the data as well. In this study, we (1) propose a template-based method to quantify bias in BERT; (2) show that this method obtains more consistent results in capturing social biases than the traditional cosine-based method; and (3) conduct a case study evaluating gender bias in the downstream task of Gender Pronoun Resolution. Although our case study focuses on gender bias, the proposed technique is generalizable to unveiling other biases, including in multiclass settings, such as racial and religious biases.
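The template-based probe can be illustrated with a short masked-language-model sketch using the Hugging Face transformers library. This is not the paper's exact scoring code; the template, the "prior" normalization via a doubly masked sentence, and the example words are illustrative assumptions.

```python
# A minimal sketch of template-based masked-LM bias probing; templates and scoring are illustrative.
import math
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def mask_fill_prob(sentence, target_word):
    """Probability the masked LM assigns to target_word at the first [MASK] position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_idx].softmax(dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target_word)
    return probs[0, target_id].item()

def association_score(target, attribute):
    """Log probability of the target in the attribute context, normalized by the
    target's probability when the attribute is also masked (its 'prior')."""
    p_target = mask_fill_prob(f"[MASK] is a {attribute}.", target)
    p_prior = mask_fill_prob("[MASK] is a [MASK].", target)
    return math.log(p_target / p_prior)

# Example usage: a positive gap suggests a stronger association of "programmer" with "he".
gap = association_score("he", "programmer") - association_score("she", "programmer")
print(f"he-vs-she association gap for 'programmer': {gap:+.3f}")
```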