Differentially Private Next-Token Prediction of Large Language Models

James Flemings, Meisam Razaviyayn, Murali Annavaram


Abstract
Ensuring the privacy of Large Language Models (LLMs) is becoming increasingly important. The most widely adopted technique to accomplish this is DP-SGD, which trains a model to guarantee Differential Privacy (DP). However, DP-SGD overestimates an adversary's capabilities by assuming white-box access to the model and, as a result, incurs longer training times and larger memory usage than SGD. On the other hand, commercial LLM deployments are predominantly cloud-based; hence, adversarial access to LLMs is black-box. Motivated by these observations, we present Private Mixing of Ensemble Distributions (PMixED): a private prediction protocol for next-token prediction that utilizes the inherent stochasticity of next-token sampling and a public model to achieve Differential Privacy. We formalize this by introducing RD-mollifiers, which project each output distribution from an ensemble of fine-tuned LLMs onto a set around a public LLM's output distribution, then average the projected distributions and sample from the result. Unlike DP-SGD, which must account for the model architecture during training, PMixED is model agnostic, making it a very appealing solution for current deployments. Our results show that PMixED achieves a stronger privacy guarantee than sample-level privacy and outperforms DP-SGD at a privacy budget of ε = 8 on large-scale datasets. Thus, PMixED offers a practical alternative to DP training methods for achieving strong generative utility without compromising privacy.
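The project-then-mix step described in the abstract can be sketched as follows. This is a minimal illustrative sketch based only on the abstract, not the authors' released implementation: the bisection search over a mixing weight, the Rényi order alpha, the divergence bound beta, and all function names are assumptions introduced here for illustration.

import numpy as np

def renyi_divergence(p, q, alpha=2.0):
    # Renyi divergence D_alpha(P || Q) for discrete distributions, alpha > 1.
    return np.log(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

def project_toward_public(p, q, beta, alpha=2.0, iters=30):
    # Find the largest mixing weight lam such that lam * p + (1 - lam) * q
    # stays within Renyi divergence beta of the public distribution q.
    # Bisection on lam is an illustrative choice, not necessarily the paper's method.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        mixed = mid * p + (1.0 - mid) * q
        if renyi_divergence(mixed, q, alpha) <= beta:
            lo = mid  # bound satisfied: keep more of the private distribution
        else:
            hi = mid
    return lo * p + (1.0 - lo) * q

def pmixed_next_token(private_dists, public_dist, beta, alpha=2.0, rng=None):
    # Project every ensemble member's next-token distribution toward the public
    # distribution, average the projections, and sample a single token id.
    rng = rng or np.random.default_rng()
    projected = [project_toward_public(p, public_dist, beta, alpha) for p in private_dists]
    mixture = np.mean(projected, axis=0)
    mixture = mixture / mixture.sum()  # guard against floating-point drift
    return int(rng.choice(len(mixture), p=mixture))

# Toy usage over a 4-token vocabulary (hypothetical numbers).
q = np.array([0.25, 0.25, 0.25, 0.25])            # public model's distribution
ps = [np.array([0.70, 0.10, 0.10, 0.10]),         # fine-tuned ensemble members
      np.array([0.10, 0.60, 0.20, 0.10])]
token = pmixed_next_token(ps, q, beta=0.5)

The key design point the sketch illustrates is that privacy comes from prediction time rather than training time: each private distribution is only allowed to deviate from the public one by a bounded Rényi divergence before averaging and sampling.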
Anthology ID:
2024.naacl-long.247
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4390–4404
URL:
https://aclanthology.org/2024.naacl-long.247
DOI:
10.18653/v1/2024.naacl-long.247
Cite (ACL):
James Flemings, Meisam Razaviyayn, and Murali Annavaram. 2024. Differentially Private Next-Token Prediction of Large Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4390–4404, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Differentially Private Next-Token Prediction of Large Language Models (Flemings et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.247.pdf