Privacy Implications of Retrieval-Based Language Models

Yangsibo Huang, Samyak Gupta, Zexuan Zhong, Kai Li, Danqi Chen


Abstract
Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore affects model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly kNN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is a concern, aiming to strike a balance between utility and privacy. Crucially, we find that kNN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks: when private information is targeted and readily detected in the text, we find that a simple sanitization step eliminates the risks, while decoupling the query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies for mixing public and private data in both the datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs.
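For readers unfamiliar with the kNN-LM architecture the abstract refers to, the following is a minimal NumPy sketch of the standard kNN-LM interpolation (Khandelwal et al.'s formulation): the parametric LM's next-token distribution is mixed with a distribution built from the k nearest neighbors in a datastore of (context-vector, next-token) pairs. All function and variable names here are illustrative, not from the paper's code; the sketch also shows why the datastore matters for privacy, since retrieved neighbors directly shape the output distribution.

```python
import numpy as np

def knn_lm_next_token_probs(p_lm, datastore_keys, datastore_values,
                            query, vocab_size, k=8, lam=0.25, temp=1.0):
    """Interpolate a parametric LM's next-token distribution with a
    k-nearest-neighbor distribution from a (key, next-token) datastore.
    Illustrative sketch only; names are assumptions, not the paper's API."""
    # L2 distance from the query context vector to every stored key.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nn = np.argsort(dists)[:k]
    # Softmax over negative distances of the k nearest neighbors.
    w = np.exp(-dists[nn] / temp)
    w /= w.sum()
    # Aggregate neighbor weights onto their stored next-token ids.
    p_knn = np.zeros(vocab_size)
    for weight, token_id in zip(w, datastore_values[nn]):
        p_knn[token_id] += weight
    # kNN-LM interpolation: lam * p_kNN + (1 - lam) * p_LM.
    return lam * p_knn + (1 - lam) * p_lm
```

If the datastore holds private text, a query close to a stored key pulls that record's next token into the output with weight up to lam, which is the leakage channel the paper studies; the sanitization and encoder-decoupling mitigations in the abstract modify the keys and values this lookup operates on.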
Anthology ID:
2023.emnlp-main.921
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14887–14902
URL:
https://aclanthology.org/2023.emnlp-main.921
DOI:
10.18653/v1/2023.emnlp-main.921
Cite (ACL):
Yangsibo Huang, Samyak Gupta, Zexuan Zhong, Kai Li, and Danqi Chen. 2023. Privacy Implications of Retrieval-Based Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14887–14902, Singapore. Association for Computational Linguistics.
Cite (Informal):
Privacy Implications of Retrieval-Based Language Models (Huang et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.921.pdf
Video:
https://aclanthology.org/2023.emnlp-main.921.mp4