Differentially Private Language Models Benefit from Public Pre-training

Gavin Kerrigan, Dylan Slack, Jens Tuyls


Abstract
Language modeling is a keystone task in natural language processing. When training a language model on sensitive information, differential privacy (DP) allows us to quantify the degree to which our private data is protected. However, training algorithms that enforce differential privacy often degrade model quality. We study the feasibility of learning a language model that is simultaneously high-quality and privacy-preserving by tuning a public base model on a private corpus. We find that DP fine-tuning boosts the performance of language models in the private domain, making the training of such models practical.
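The recipe the abstract describes — pre-train on public data, then fine-tune on the private corpus with a differentially private optimizer — typically uses DP-SGD: clip each example's gradient, average, and add Gaussian noise. The sketch below is a minimal, dependency-free illustration of that update step; the function names and hyperparameters are illustrative assumptions, not the authors' code.

```python
import math
import random

def clip_gradient(grad, max_norm):
    """Scale one example's gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_step(params, per_example_grads, lr=0.1, max_norm=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD update on `params`.

    Each per-example gradient is clipped to `max_norm`, the clipped
    gradients are summed, Gaussian noise with standard deviation
    noise_mult * max_norm is added per coordinate, and the noisy sum
    is averaged over the batch before the gradient step.
    """
    rng = rng or random.Random(0)
    clipped = [clip_gradient(g, max_norm) for g in per_example_grads]
    n = len(clipped)
    noisy_avg = [
        (sum(g[i] for g in clipped) + rng.gauss(0.0, noise_mult * max_norm)) / n
        for i in range(len(params))
    ]
    return [p - lr * g for p, g in zip(params, noisy_avg)]
```

In the paper's setting, `params` would be the weights of a publicly pre-trained language model, and `per_example_grads` would come from batches of the private corpus; the clipping bound and noise multiplier together determine the (epsilon, delta) privacy guarantee via a privacy accountant.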
Anthology ID:
2020.privatenlp-1.5
Volume:
Proceedings of the Second Workshop on Privacy in NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Oluwaseyi Feyisetan, Sepideh Ghanavati, Shervin Malmasi, Patricia Thaine
Venue:
PrivateNLP
Publisher:
Association for Computational Linguistics
Pages:
39–45
URL:
https://aclanthology.org/2020.privatenlp-1.5
DOI:
10.18653/v1/2020.privatenlp-1.5
Cite (ACL):
Gavin Kerrigan, Dylan Slack, and Jens Tuyls. 2020. Differentially Private Language Models Benefit from Public Pre-training. In Proceedings of the Second Workshop on Privacy in NLP, pages 39–45, Online. Association for Computational Linguistics.
Cite (Informal):
Differentially Private Language Models Benefit from Public Pre-training (Kerrigan et al., PrivateNLP 2020)
PDF:
https://aclanthology.org/2020.privatenlp-1.5.pdf
Video:
https://slideslive.com/38939774