Private Release of Text Embedding Vectors

Oluwaseyi Feyisetan, Shiva Kasiviswanathan


Abstract
Ensuring strong theoretical privacy guarantees on text data is a challenging problem, and such guarantees are usually attained at the expense of utility. However, to improve the practicality of privacy-preserving text analyses, it is essential to design algorithms that better optimize this tradeoff. To address this challenge, we propose a release mechanism that takes any (text) embedding vector as input and releases a corresponding private vector. The mechanism satisfies an extension of differential privacy to metric spaces. Our approach, which first randomly projects the vectors to a lower-dimensional space and then adds noise in this projected space, generates private vectors that achieve strong theoretical guarantees on their utility. We support our theoretical proofs with empirical experiments on multiple word embedding models and NLP datasets, in some cases achieving gains of more than 10% over existing state-of-the-art privatization techniques.
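The abstract's mechanism (random projection to a lower-dimensional space, then noise calibrated for metric differential privacy) can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the Gaussian projection, and the noise sampler (uniform direction with Gamma-distributed magnitude, a standard way to obtain a density proportional to exp(-ε‖z‖) in R^k) are assumptions for illustration.

```python
import numpy as np

def privatize_embedding(x, k, epsilon, rng=None):
    """Hypothetical sketch of a projection-then-noise release mechanism.

    Projects an embedding x in R^d down to R^k with a random Gaussian
    matrix, then adds noise whose density is proportional to
    exp(-epsilon * ||z||), a common choice for metric-DP mechanisms.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    # Random Gaussian projection, scaled to roughly preserve norms
    # (Johnson-Lindenstrauss style).
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    y = P @ x
    # Noise sample: uniform direction on the sphere times a
    # Gamma(k, 1/epsilon) magnitude, which yields the density
    # proportional to exp(-epsilon * ||z||) in R^k.
    direction = rng.normal(size=k)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=k, scale=1.0 / epsilon)
    return y + magnitude * direction
```

Larger ε adds less noise (weaker privacy, higher utility); the choice of k trades projection distortion against the noise magnitude needed in the projected space.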
Anthology ID:
2021.trustnlp-1.3
Volume:
Proceedings of the First Workshop on Trustworthy Natural Language Processing
Month:
June
Year:
2021
Address:
Online
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
15–27
URL:
https://aclanthology.org/2021.trustnlp-1.3
DOI:
10.18653/v1/2021.trustnlp-1.3
Cite (ACL):
Oluwaseyi Feyisetan and Shiva Kasiviswanathan. 2021. Private Release of Text Embedding Vectors. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 15–27, Online. Association for Computational Linguistics.
Cite (Informal):
Private Release of Text Embedding Vectors (Feyisetan & Kasiviswanathan, TrustNLP 2021)
PDF:
https://aclanthology.org/2021.trustnlp-1.3.pdf
Data
MPQA Opinion Corpus
SST