Don’t forget private retrieval: distributed private similarity search for large language models

Guy Zyskind, Tobin South, Alex Pentland


Abstract
While the flexible capabilities of large language models (LLMs) allow them to answer a range of queries based on existing learned knowledge, information retrieval to augment generation is an important tool for allowing LLMs to answer questions about information not included in pre-training data. Such private information is increasingly being generated in a wide array of distributed contexts by organizations and individuals. Performing such information retrieval using neural embeddings of queries and documents has always leaked information about queries and database content unless both were stored locally. We present Private Retrieval Augmented Generation (PRAG), an approach that uses multi-party computation (MPC) to securely transmit queries to a distributed set of servers containing a privately constructed database to return top-k and approximate top-k documents. This is a first-of-its-kind approach to dense information retrieval that ensures no server observes a client’s query or can see the database content. The approach introduces a novel MPC-friendly protocol for inverted file approximate search (IVF) that allows for fast document search over distributed and private data with sublinear communication complexity. This work presents new avenues through which data for use in LLMs can be accessed and used without needing to centralize or forgo privacy.
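As a minimal sketch of the core idea (not the paper's actual protocol), the following shows how a query embedding can be additively secret-shared among several servers so that each server computes only a share of the similarity scores, which the client recombines to pick the top-k documents. For simplicity the document matrix here is replicated in the clear at each server; the full PRAG protocol secret-shares the database as well and uses an MPC-friendly IVF index. All names, the field modulus, and the fixed-point scale are illustrative assumptions.

```python
import numpy as np

PRIME = 2**61 - 1  # illustrative field modulus (assumption, not from the paper)
SCALE = 2**16      # fixed-point scaling factor for embedding floats

def encode(v):
    """Fixed-point encode a float array into the field (object dtype for big ints)."""
    return np.round(np.asarray(v) * SCALE).astype(int).astype(object) % PRIME

def decode(x, scale):
    """Map field elements back to signed floats."""
    x = np.where(x > PRIME // 2, x - PRIME, x)
    return np.asarray(x, dtype=float) / scale

def share(x, n, rng):
    """Additively secret-share an encoded array among n servers."""
    shares = [rng.integers(0, PRIME, size=x.shape).astype(object) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % PRIME)  # shares sum to x mod PRIME
    return shares

# Toy example: 3 servers, 5 documents, 4-dimensional embeddings.
rng = np.random.default_rng(0)
docs = rng.standard_normal((5, 4))
query = rng.standard_normal(4)

q_shares = share(encode(query), 3, rng)

# Each server locally computes its share of the dot-product scores;
# no single server learns anything about the query.
score_shares = [encode(docs).dot(qs) % PRIME for qs in q_shares]

# The client reconstructs the scores and selects the top-k documents.
scores = decode(sum(score_shares) % PRIME, SCALE * SCALE)
top_k = np.argsort(scores)[::-1][:2]
assert top_k[0] == np.argmax(docs @ query)
```

Because additive shares are uniformly random field elements, any single server's view is independent of the query; only the reconstructed sum reveals the scores, and only to the client.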
Anthology ID:
2024.privatenlp-1.2
Volume:
Proceedings of the Fifth Workshop on Privacy in Natural Language Processing
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Ivan Habernal, Sepideh Ghanavati, Abhilasha Ravichander, Vijayanta Jain, Patricia Thaine, Timour Igamberdiev, Niloofar Mireshghallah, Oluwaseyi Feyisetan
Venues:
PrivateNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
7–19
URL:
https://aclanthology.org/2024.privatenlp-1.2
Cite (ACL):
Guy Zyskind, Tobin South, and Alex Pentland. 2024. Don’t forget private retrieval: distributed private similarity search for large language models. In Proceedings of the Fifth Workshop on Privacy in Natural Language Processing, pages 7–19, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Don’t forget private retrieval: distributed private similarity search for large language models (Zyskind et al., PrivateNLP-WS 2024)
PDF:
https://aclanthology.org/2024.privatenlp-1.2.pdf