Adaptive Differential Privacy for Language Model Training

Xinwei Wu, Li Gong, Deyi Xiong


Abstract
Although differential privacy (DP) can protect language models from leaking private information, its indiscriminate protection of all data points reduces its practical utility. Previous works improve DP training by discriminating between private and non-private data, but they rely on datasets annotated with prior privacy information, which is not available in real-world scenarios. In this paper, we propose an Adaptive Differential Privacy (ADP) framework for language modeling that does not resort to prior privacy information. We estimate the probability that a linguistic item contains private information using a language model. We further propose a new Adam algorithm that adjusts the degree of differential privacy noise injected into the language model according to the estimated privacy probabilities. Experiments demonstrate that our ADP improves differentially private language modeling and achieves good protection against canary attackers.
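To make the mechanism in the abstract concrete, below is a minimal Python sketch of the idea: estimate a per-example privacy probability from language-model loss, then scale the Gaussian noise of a differentially private Adam step by that estimate. The function names, the sigmoid mapping from loss to probability, and the mean-probability noise-scaling rule are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def privacy_probability(loss_under_lm: float, threshold: float = 4.0) -> float:
    """Hypothetical estimator: treat low language-model loss on an item as a
    sign of memorized or private content and map it to a probability in [0, 1].
    The paper estimates this probability with a language model; the sigmoid
    mapping here is an illustrative assumption."""
    return float(1.0 / (1.0 + np.exp(loss_under_lm - threshold)))

def adaptive_dp_adam_step(params, grads, priv_probs, state,
                          lr=1e-3, clip_norm=1.0, base_sigma=1.0,
                          betas=(0.9, 0.999), eps=1e-8):
    """One Adam step with per-example clipping and adaptive Gaussian noise.

    grads: list of per-example gradients, each the same shape as params.
    priv_probs: estimated privacy probability per example; noise is scaled
    by the batch's mean probability (an assumed scaling rule)."""
    # Clip each per-example gradient to bound sensitivity (standard DP-SGD style).
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    g_sum = np.sum(clipped, axis=0)

    # Adaptive noise: higher estimated privacy -> larger noise multiplier.
    sigma = base_sigma * float(np.mean(priv_probs))
    noisy_grad = (g_sum + np.random.normal(0.0, sigma * clip_norm,
                                           size=g_sum.shape)) / len(grads)

    # Standard Adam moment updates, applied to the noisy gradient.
    state["t"] += 1
    state["m"] = betas[0] * state["m"] + (1 - betas[0]) * noisy_grad
    state["v"] = betas[1] * state["v"] + (1 - betas[1]) * noisy_grad ** 2
    m_hat = state["m"] / (1 - betas[0] ** state["t"])
    v_hat = state["v"] / (1 - betas[1] ** state["t"])
    return params - lr * m_hat / (np.sqrt(v_hat) + eps)

# Usage sketch: a 4-dimensional parameter vector and two per-example gradients.
params = np.zeros(4)
state = {"t": 0, "m": np.zeros(4), "v": np.zeros(4)}
grads = [np.random.randn(4), np.random.randn(4)]
probs = [privacy_probability(2.5), privacy_probability(6.0)]
params = adaptive_dp_adam_step(params, grads, probs, state)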
Anthology ID:
2022.fl4nlp-1.3
Volume:
Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Bill Yuchen Lin, Chaoyang He, Chulin Xie, Fatemehsadat Mireshghallah, Ninareh Mehrabi, Tian Li, Mahdi Soltanolkotabi, Xiang Ren
Venue:
FL4NLP
Publisher:
Association for Computational Linguistics
Pages:
21–26
URL:
https://aclanthology.org/2022.fl4nlp-1.3
DOI:
10.18653/v1/2022.fl4nlp-1.3
Cite (ACL):
Xinwei Wu, Li Gong, and Deyi Xiong. 2022. Adaptive Differential Privacy for Language Model Training. In Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022), pages 21–26, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Adaptive Differential Privacy for Language Model Training (Wu et al., FL4NLP 2022)
PDF:
https://aclanthology.org/2022.fl4nlp-1.3.pdf
Data:
WikiText-103, WikiText-2