Privacy- and Utility-Preserving NLP with Anonymized data: A case study of Pseudonymization

Oleksandr Yermilov, Vipul Raheja, Artem Chernodub

Abstract
This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into how anonymization (focusing on pseudonymization) affects the gap in model quality between original and anonymized data, and fosters future research into higher-quality anonymization techniques that better balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.
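The rule-based end of the spectrum described in the abstract amounts to detecting named entities and swapping them for surrogate values. As a minimal sketch of that idea only (not the authors' implementation; the surrogate pools and the pseudonymize function below are illustrative assumptions), a Python version built on spaCy's off-the-shelf NER could look like this:

# Illustrative rule-based pseudonymization sketch (not the paper's code).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical surrogate pools; a real system would use larger lists.
SURROGATES = {
    "PERSON": ["Alex Doe", "Sam Roe"],
    "GPE": ["Springfield", "Riverton"],
    "ORG": ["Acme Corp", "Globex"],
}

def pseudonymize(text: str) -> str:
    """Replace detected named entities with consistent surrogates."""
    doc = nlp(text)
    mapping, counters = {}, {}
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ not in SURROGATES:
            continue
        # Reuse the same surrogate for repeated mentions of one entity.
        if ent.text not in mapping:
            i = counters.get(ent.label_, 0)
            pool = SURROGATES[ent.label_]
            mapping[ent.text] = pool[i % len(pool)]
            counters[ent.label_] = i + 1
        out.append(text[last:ent.start_char])
        out.append(mapping[ent.text])
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(pseudonymize("John Smith works at Google in London."))
# e.g. -> "Alex Doe works at Acme Corp in Springfield."

Keeping a per-entity mapping, as above, makes the substitution consistent (every mention of "John Smith" maps to the same surrogate), which is one way such techniques try to preserve downstream utility on tasks like summarization.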
Anthology ID:
2023.trustnlp-1.20
Volume:
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anaelia Ovalle, Kai-Wei Chang, Ninareh Mehrabi, Yada Pruksachatkun, Aram Galstyan, Jwala Dhamala, Apurv Verma, Trista Cao, Anoop Kumar, Rahul Gupta
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
232–241
URL:
https://aclanthology.org/2023.trustnlp-1.20
DOI:
10.18653/v1/2023.trustnlp-1.20
Cite (ACL):
Oleksandr Yermilov, Vipul Raheja, and Artem Chernodub. 2023. Privacy- and Utility-Preserving NLP with Anonymized data: A case study of Pseudonymization. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 232–241, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Privacy- and Utility-Preserving NLP with Anonymized data: A case study of Pseudonymization (Yermilov et al., TrustNLP 2023)
PDF:
https://aclanthology.org/2023.trustnlp-1.20.pdf