DP-MLM: Differentially Private Text Rewriting Using Masked Language Models

Stephen Meisenbacher, Maulik Chevli, Juraj Vladika, Florian Matthes


Abstract
The task of text privatization using Differential Privacy has recently taken the form of text rewriting, in which an input text is obfuscated via the use of generative (large) language models. While these methods have shown promising results in preserving privacy, they rely on autoregressive models, which lack a mechanism to contextualize the private rewriting process. In response, we propose DP-MLM, a new method for differentially private text rewriting that leverages masked language models (MLMs) to rewrite text in a semantically similar yet obfuscated manner. We accomplish this with a simple contextualization technique, whereby we rewrite a text one token at a time. We find that utilizing encoder-only MLMs provides better utility preservation at lower 𝜀 levels, as compared to previous methods relying on larger decoder-based models. In addition, MLMs allow for greater customization of the rewriting mechanism than generative approaches. We make the code for DP-MLM publicly available and reusable at https://github.com/sjmeis/DPMLM.
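The abstract does not spell out the mechanism, but the general recipe it describes (mask one token at a time, let the MLM score candidate replacements, sample a replacement under a differential privacy guarantee) can be sketched as follows. This is a minimal, hypothetical illustration in Python, assuming a HuggingFace masked language model and an exponential-mechanism-style sampler over clipped logits; the model name, clipping bounds, and per-token privacy accounting are assumptions made for illustration, not the authors' exact implementation (see the linked repository for that).

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative model choice; DP-MLM's actual model and parameters may differ.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

def dp_rewrite(text: str, epsilon: float, clip=(-3.0, 3.0)) -> str:
    """Rewrite `text` one token at a time, spending `epsilon` per token."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    out = ids.clone()
    for i in range(1, len(ids) - 1):  # skip the special start/end tokens
        masked = ids.clone()  # context comes from the original text
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
        # Clipping bounds the utility's sensitivity; the exponential
        # mechanism then samples token w with probability proportional
        # to exp(epsilon * u(w) / (2 * du)).
        u = logits.clamp(clip[0], clip[1])
        du = clip[1] - clip[0]  # sensitivity of the clipped utility
        probs = torch.softmax(epsilon * u / (2 * du), dim=-1)
        out[i] = torch.multinomial(probs, 1).item()
    return tokenizer.decode(out[1:-1]).strip()

# Example usage; per-token epsilon costs compose over the whole text.
print(dp_rewrite("The patient was diagnosed with diabetes.", epsilon=50.0))

Note that the sketch always conditions each prediction on the original surrounding tokens rather than on previously rewritten ones, echoing the contextualization idea described in the abstract; whether and how the paper feeds back rewritten tokens is a detail left to the actual implementation.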
Anthology ID: 2024.findings-acl.554
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 9314–9328
URL: https://aclanthology.org/2024.findings-acl.554
Cite (ACL): Stephen Meisenbacher, Maulik Chevli, Juraj Vladika, and Florian Matthes. 2024. DP-MLM: Differentially Private Text Rewriting Using Masked Language Models. In Findings of the Association for Computational Linguistics ACL 2024, pages 9314–9328, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): DP-MLM: Differentially Private Text Rewriting Using Masked Language Models (Meisenbacher et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.554.pdf