An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference
Yu Lin | Qizhi Zhang | Quanwei Cai | Jue Hong | Wu Ye | Huiqi Liu | Bing Duan
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
With the rapidly growing deployment of large language model (LLM) inference services, privacy concerns have arisen regarding user input data. Recent studies explore transforming user inputs into obfuscated embedded vectors so that the data cannot be eavesdropped on by service providers. However, in this paper we show that, once again, without a solid and deliberate security design and analysis, such embedded-vector obfuscation fails to protect users' privacy. We demonstrate this conclusion by conducting a novel inversion attack, Element-wise Differential Nearest Neighbor (EDNN), on the glide-reflection scheme proposed in (CITATION); the result shows that the original user input text can be 100% recovered from the obfuscated embedded vectors. We further analyze the security requirements for embedding obfuscation and present several remedies against our proposed attack.
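To illustrate the general class of attack the abstract describes, the following is a minimal, hypothetical sketch: it assumes a toy obfuscation that merely adds a secret offset to each token embedding (a stand-in for the translation component of a glide reflection, not the paper's actual scheme), and recovers tokens by exploiting the fact that *differences* between obfuscated vectors cancel the additive mask. All names, the toy vocabulary, and the brute-force anchoring step are illustrative assumptions, not the authors' EDNN algorithm.

```python
# Hypothetical sketch of an embedding-inversion attack against an
# additive-mask obfuscation. Differences between obfuscated vectors are
# invariant under the mask, so nearest-neighbour matching on differences
# (plus brute-forcing the first token) recovers the plaintext tokens.

VOCAB = {                      # toy embedding matrix (illustrative)
    "the": (0.1, 0.9),
    "cat": (0.8, 0.2),
    "sat": (0.4, 0.5),
    "mat": (0.7, 0.7),
}

SECRET_OFFSET = (3.2, -1.7)    # unknown to the attacker

def obfuscate(vec):
    """Toy obfuscation: translate the embedding by a secret offset."""
    return tuple(v + t for v, t in zip(vec, SECRET_OFFSET))

def diff(a, b):
    return tuple(x - y for x, y in zip(a, b))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def invert(obf_seq):
    """Recover the token sequence: for each candidate first token, derive
    the implied offset, undo it, and nearest-neighbour match the rest;
    keep the candidate with the smallest total matching error."""
    tokens = list(VOCAB)
    best = None
    for first in tokens:
        offset = diff(obf_seq[0], VOCAB[first])   # implied secret offset
        guess, err = [first], 0.0
        for ov in obf_seq[1:]:
            target = diff(ov, offset)             # de-obfuscated vector
            tok = min(tokens, key=lambda t: sq_dist(VOCAB[t], target))
            err += sq_dist(VOCAB[tok], target)
            guess.append(tok)
        if best is None or err < best[0]:
            best = (err, guess)
    return best[1]

sentence = ["the", "cat", "sat"]
obfuscated = [obfuscate(VOCAB[w]) for w in sentence]
print(invert(obfuscated))  # → ['the', 'cat', 'sat']
```

The correct first-token guess implies the true offset, so every de-obfuscated vector lands exactly on a vocabulary embedding and the total error is zero; wrong guesses leave a residual error, which is how the attacker distinguishes them.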