Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling

Chris Emmery, Ákos Kádár, Grzegorz Chrupała


Abstract
Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author information. Adversarial stylometry intends to attack such models by rewriting an author’s text. Our research proposes several components to facilitate deployment of these adversarial attacks in the wild, where neither data nor target models are accessible. We introduce a transformer-based extension of a lexical replacement attack, and show it achieves high transferability when trained on a weakly labeled corpus—decreasing target model performance below chance. While not completely inconspicuous, our more successful attacks also prove notably less detectable by humans. Our framework therefore provides a promising direction for future privacy-preserving adversarial attacks.
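The abstract describes a transformer-based lexical substitution attack on author profiling models. As an illustrative sketch only, and not the authors' released method (see the cmry/reap repository linked below), the snippet shows how a masked language model can be used to propose word-level substitutes for a target position, which is the general mechanism such attacks build on. The model name bert-base-uncased, the substitute function, and the example sentence are placeholders assumed for this example.

# Illustrative sketch of masked-LM lexical substitution; not the paper's exact attack.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def substitute(tokens, position, top_k=5):
    """Mask the token at `position` and return the masked LM's top replacement candidates."""
    masked = list(tokens)
    masked[position] = fill_mask.tokenizer.mask_token
    candidates = fill_mask(" ".join(masked), top_k=top_k)
    # Keep only substitutes that differ from the original word.
    return [c["token_str"] for c in candidates
            if c["token_str"].lower() != tokens[position].lower()]

sentence = "I absolutely adore hiking in the mountains".split()
print(substitute(sentence, position=2))  # e.g. candidates such as 'love', 'enjoy'

In an adversarial-stylometry setting, candidates like these would then be filtered and selected so that the rewritten text degrades a profiling model's predictions while staying fluent.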
Anthology ID:
2021.eacl-main.203
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2388–2402
URL:
https://aclanthology.org/2021.eacl-main.203
DOI:
10.18653/v1/2021.eacl-main.203
Cite (ACL):
Chris Emmery, Ákos Kádár, and Grzegorz Chrupała. 2021. Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2388–2402, Online. Association for Computational Linguistics.
Cite (Informal):
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling (Emmery et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.203.pdf
Code:
cmry/reap