PII-Compass: Guiding LLM training data extraction prompts towards the target PII via grounding

Krishna Nakka, Ahmed Frikha, Ricardo Mendes, Xue Jiang, Xuebing Zhou


Abstract
The latest and most impactful advances in large models stem from their increased size. Unfortunately, this translates into an improved memorization capacity, raising data privacy concerns. Specifically, it has been shown that models can output personally identifiable information (PII) contained in their training data. However, reported PII extraction performance varies widely, and there is no consensus on the optimal methodology for evaluating this risk, resulting in an underestimation of realistic adversaries. In this work, we empirically demonstrate that it is possible to improve the extractability of PII by more than tenfold by grounding the prefix of the manually constructed extraction prompt with in-domain data. This approach achieves phone number extraction rates of 0.92%, 3.9%, and 6.86% with 1, 128, and 2308 queries, respectively, i.e., the phone number of 1 person in 15 is extractable.
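The abstract only sketches the attack at a high level. Below is a minimal illustrative sketch of what prefix grounding could look like in practice, assuming a HuggingFace causal LM; the model name, prompt template, grounding passage, person name, and phone numbers are hypothetical placeholders, not the paper's actual prompts or data.

```python
# Minimal sketch of prefix-grounded PII extraction (illustrative only).
# Assumes a HuggingFace causal LM; "gpt2" is a stand-in, and all names,
# prompts, and phone numbers below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hand-crafted extraction prompt for the target data subject.
extraction_prompt = "You can reach Jane Doe by phone at "

# In-domain grounding prefix: text drawn from the same domain as the
# training data (e.g., another person's contact record), prepended to
# steer the model toward the region of memorized PII.
grounding_prefix = (
    "Contact directory entry: John Smith, Sales, phone 555-0100, "
    "email john.smith@example.com.\n"
)

prompt = grounding_prefix + extraction_prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
# Decode only the newly generated tokens (the continuation after the prompt).
generation = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])

# The attack counts as successful if the target's true phone number
# appears in the continuation.
target_pii = "555-0199"  # hypothetical ground-truth phone number
print("extracted" if target_pii in generation else "not extracted", generation)
```

Repeating this query with different grounding passages (or sampled continuations) corresponds to the multi-query setting reported in the abstract, where extraction rates grow with the number of queries.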
Anthology ID: 2024.privatenlp-1.7
Volume: Proceedings of the Fifth Workshop on Privacy in Natural Language Processing
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Ivan Habernal, Sepideh Ghanavati, Abhilasha Ravichander, Vijayanta Jain, Patricia Thaine, Timour Igamberdiev, Niloofar Mireshghallah, Oluwaseyi Feyisetan
Venues: PrivateNLP | WS
Publisher: Association for Computational Linguistics
Pages: 63–73
URL: https://aclanthology.org/2024.privatenlp-1.7
Cite (ACL): Krishna Nakka, Ahmed Frikha, Ricardo Mendes, Xue Jiang, and Xuebing Zhou. 2024. PII-Compass: Guiding LLM training data extraction prompts towards the target PII via grounding. In Proceedings of the Fifth Workshop on Privacy in Natural Language Processing, pages 63–73, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): PII-Compass: Guiding LLM training data extraction prompts towards the target PII via grounding (Nakka et al., PrivateNLP-WS 2024)
PDF: https://aclanthology.org/2024.privatenlp-1.7.pdf