MoPe: Model Perturbation based Privacy Attacks on Language Models

Marvin Li, Jason Wang, Jeffrey Wang, Seth Neel


Abstract
Recent work has shown that Large Language Models (LLMs) can unintentionally leak sensitive information present in their training data. In this paper, we present Model Perturbations (MoPe), a new method to identify with high confidence whether a given text is in the training data of a pre-trained language model, given white-box access to the model's parameters. MoPe adds noise to the model in parameter space and measures the drop in log-likelihood at a given point x, a statistic we show approximates the trace of the Hessian matrix with respect to the model parameters. Across language models ranging from 70M to 12B parameters, we show that MoPe is more effective than existing loss-based attacks and recently proposed perturbation-based methods. We also examine the role of training-point order and model size in attack success, and empirically demonstrate that MoPe accurately approximates the trace of the Hessian in practice. Our results show that the loss of a point alone is insufficient to determine extractability: there are training points we can recover using our method that have average loss. This casts some doubt on prior works that use the loss of a point as evidence of memorization or unlearning.
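As a rough illustration of the statistic the abstract describes, the sketch below perturbs every parameter of a causal language model with Gaussian noise and records the resulting drop in log-likelihood of a candidate text; by a second-order Taylor expansion, the expected drop scales with the trace of the Hessian at the model's parameters. This is a minimal sketch, assuming a HuggingFace-style model interface (a forward pass that accepts `labels` and returns a `loss` equal to the mean token negative log-likelihood); the noise scale, number of perturbations, and all names here are illustrative assumptions, not the paper's implementation.

```python
import torch

def mope_statistic(model, input_ids, sigma=0.005, n_perturbations=8):
    """Average drop in log-likelihood of `input_ids` when i.i.d. Gaussian
    noise N(0, sigma^2) is added to every model parameter. The expected
    drop is approximately (sigma^2 / 2) * tr(H), where H is the Hessian
    of the loss with respect to the parameters (hypothetical sketch;
    hyperparameter values are illustrative)."""
    model.eval()

    def log_likelihood():
        with torch.no_grad():
            out = model(input_ids, labels=input_ids)
        # HuggingFace-style causal LMs return the mean token NLL as `loss`.
        return -out.loss.item()

    base_ll = log_likelihood()
    # Keep a copy of the unperturbed weights so we can restore them.
    originals = [p.detach().clone() for p in model.parameters()]

    drops = []
    for _ in range(n_perturbations):
        with torch.no_grad():
            for p in model.parameters():
                p.add_(torch.randn_like(p) * sigma)  # perturb in parameter space
        drops.append(base_ll - log_likelihood())
        with torch.no_grad():
            for p, orig in zip(model.parameters(), originals):
                p.copy_(orig)  # restore original weights

    # A larger average drop indicates sharper curvature at this point,
    # which MoPe uses as evidence that the text was in the training data.
    return sum(drops) / len(drops)
```

In use, one would compare the statistic at a candidate text against its value on comparable non-member text and flag the candidate as a likely training point when the drop is markedly larger.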
Anthology ID:
2023.emnlp-main.842
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13647–13660
URL:
https://aclanthology.org/2023.emnlp-main.842
DOI:
10.18653/v1/2023.emnlp-main.842
Cite (ACL):
Marvin Li, Jason Wang, Jeffrey Wang, and Seth Neel. 2023. MoPe: Model Perturbation based Privacy Attacks on Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13647–13660, Singapore. Association for Computational Linguistics.
Cite (Informal):
MoPe: Model Perturbation based Privacy Attacks on Language Models (Li et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.842.pdf
Video:
https://aclanthology.org/2023.emnlp-main.842.mp4