Dylan Slack
2021
On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar | Michele Donini | Dylan Slack | Cédric Archambeau | Sanjiv Das | Krishnaram Kenthapadi
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
Differentially Private Language Models Benefit from Public Pre-training
Gavin Kerrigan | Dylan Slack | Jens Tuyls
Proceedings of the Second Workshop on Privacy in NLP
Language modeling is a keystone task in natural language processing. When training a language model on sensitive information, differential privacy (DP) allows us to quantify the degree to which our private data is protected. However, training algorithms that enforce differential privacy often lead to degradation in model quality. We study the feasibility of learning a language model that is simultaneously high-quality and privacy-preserving by tuning a public base model on a private corpus. We find that DP fine-tuning boosts the performance of language models in the private domain, making the training of such models possible.