Provably Confidential Language Modelling

Xuandong Zhao, Lei Li, Yu-Xiang Wang


Abstract
Large language models have been shown to memorize private information, such as social security numbers, contained in their training data. Given the sheer scale of the training corpus, it is challenging to screen and filter such private data, either manually or automatically. In this paper, we propose Confidentially Redacted Training (CRT), a method for training language generation models while protecting confidential segments. We borrow ideas from differential privacy (which solves a related but distinct problem) and show that our method provably prevents unintended memorization by randomizing parts of the training process. Moreover, we show that redaction with an approximately correct screening policy amplifies the confidentiality guarantee. We implement the method for both LSTM and GPT language models. Our experimental results show that models trained with CRT achieve almost the same perplexity while preserving strong confidentiality.
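To make the redaction step concrete, below is a minimal, hypothetical sketch of what a pattern-based screening policy might look like: confidential segments (here, strings shaped like social security numbers) are detected and replaced with a placeholder token before the text is used for language-model training. The regex, placeholder token, and function name are illustrative assumptions, not the authors' implementation; in the paper, the residual risk from an imperfect screening policy is further handled by randomizing parts of the training process.

```python
import re

# Toy screening policy: match SSN-like strings (an illustrative assumption).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str, placeholder: str = "<REDACTED>") -> str:
    """Replace segments flagged by the screening policy with a placeholder token."""
    return SSN_PATTERN.sub(placeholder, text)

if __name__ == "__main__":
    sample = "Patient John Doe, SSN 123-45-6789, was admitted on Monday."
    print(redact(sample))
    # -> Patient John Doe, SSN <REDACTED>, was admitted on Monday.
```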
Anthology ID:
2022.naacl-main.69
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
943–955
URL:
https://aclanthology.org/2022.naacl-main.69
DOI:
10.18653/v1/2022.naacl-main.69
Cite (ACL):
Xuandong Zhao, Lei Li, and Yu-Xiang Wang. 2022. Provably Confidential Language Modelling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 943–955, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Provably Confidential Language Modelling (Zhao et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.69.pdf
Video:
https://aclanthology.org/2022.naacl-main.69.mp4
Code:
xuandongzhao/crt