IDT: Dual-Task Adversarial Rewriting for Attribute Anonymization

Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras

Abstract
Natural language processing (NLP) models may leak private information in different ways, including through membership inference, reconstruction, or attribute inference attacks. Sensitive information may not be explicit in the text, but hidden in underlying writing characteristics. Methods to protect privacy can involve using representations inside models that are demonstrated not to detect sensitive attributes or, for instance where users might be at risk from an untrustworthy model (the scenario of interest here), changing the raw text before models can access it. The goal is to rewrite the text so that a sensitive attribute (e.g., the author's gender, or their location as revealed by writing style) cannot be inferred, while keeping the text useful for its original purpose (e.g., the sentiment of a product review). The few works tackling this have focused on generative techniques; however, these often produce texts that differ extensively from the originals, or suffer from problems such as mode collapse. This article explores a novel adaptation of adversarial attack techniques to manipulate a text so that it deceives a classifier on one task (privacy) while leaving unchanged the predictions of a classifier trained on another task (utility). We propose IDT, a method that analyses predictions made by auxiliary and interpretable models to identify which tokens are important to change for the privacy task, and which should be kept for the utility task. We evaluate on NLP datasets suitable for different tasks. Automatic and human evaluations show that IDT retains the utility of the text while outperforming existing methods at deceiving a classifier on the privacy task.
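The core idea described in the abstract can be illustrated with a minimal sketch: score each token's importance for two classifiers via leave-one-out, then edit only tokens that matter for the privacy classifier but not for the utility classifier. The scoring functions, cue lists, and masking strategy below are hypothetical stand-ins for illustration, not the authors' actual models or edit operations.

```python
def importance(tokens, score_fn):
    """Leave-one-out importance: score drop when each token is removed."""
    base = score_fn(tokens)
    return [base - score_fn(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

def dual_task_rewrite(tokens, privacy_score, utility_score, mask="<mask>"):
    """Mask tokens important to the privacy task but not the utility task."""
    priv = importance(tokens, privacy_score)
    util = importance(tokens, utility_score)
    return [mask if p > 0 and u <= 0 else tok
            for tok, p, u in zip(tokens, priv, util)]

# Toy scorers: keyword counts standing in for classifier confidences.
PRIVACY_CUES = {"mate", "cheers"}      # hypothetical style/location cues
UTILITY_CUES = {"great", "terrible"}   # hypothetical sentiment cues

privacy_score = lambda toks: sum(t in PRIVACY_CUES for t in toks)
utility_score = lambda toks: sum(t in UTILITY_CUES for t in toks)

text = "great phone mate cheers".split()
print(dual_task_rewrite(text, privacy_score, utility_score))
# ['great', 'phone', '<mask>', '<mask>']
```

Here the style cues are masked while the sentiment-bearing token survives; the actual method replaces tokens with fluent substitutes rather than mask symbols.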
Anthology ID:
2025.cl-4.3
Volume:
Computational Linguistics, Volume 51, Issue 4 - December 2025
Month:
December
Year:
2025
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
1151–1189
URL:
https://aclanthology.org/2025.cl-4.3/
DOI:
10.1162/coli.a.17
Cite (ACL):
Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, and Mark Dras. 2025. IDT: Dual-Task Adversarial Rewriting for Attribute Anonymization. Computational Linguistics, 51(4):1151–1189.
Cite (Informal):
IDT: Dual-Task Adversarial Rewriting for Attribute Anonymization (Faustini et al., CL 2025)
PDF:
https://aclanthology.org/2025.cl-4.3.pdf