Privacy Evaluation Benchmarks for NLP Models

Wei Huang, Yinggui Wang, Cen Chen


Abstract
By mounting privacy attacks against NLP models, attackers can obtain sensitive information such as training data and model parameters. Although researchers have studied several kinds of attacks on NLP models in depth, these analyses are non-systematic and lack a comprehensive understanding of the attacks' impact. For example, we must consider which attacks apply to which scenarios, the common factors that affect the performance of different attacks, the relationships between different attacks, and the influence of various datasets and models on the attacks' effectiveness. We therefore need a benchmark to holistically assess the privacy risks faced by NLP models. In this paper, we present a privacy attack and defense evaluation benchmark for NLP that covers both conventional/small models and large language models (LLMs). The benchmark supports a variety of models, datasets, and protocols, along with standardized modules for the comprehensive evaluation of attacks and defense strategies. Building on this framework, we study the association between auxiliary data from different domains and the strength of privacy attacks, and we provide an improved attack method for this scenario with the help of Knowledge Distillation (KD). Furthermore, we propose a chained framework for privacy attacks, allowing a practitioner to chain multiple attacks to achieve a higher-level attack objective; on this basis, we provide corresponding defense and enhanced attack strategies. The code for reproducing the results can be found at https://anonymous.4open.science/r/nlp_doctor-AF48
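To make the chained-attack idea in the abstract concrete, here is a minimal sketch of composing attacks so that one stage's output becomes the next stage's input. All names (`AttackContext`, `ModelExtraction`, `MembershipInference`, `run_chain`) are hypothetical illustrations and are not taken from the paper's released code.

```python
# Illustrative sketch only: a toy pipeline where a model-extraction stage
# produces a surrogate that a membership-inference stage then consumes.
# None of these names come from the paper's benchmark.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class AttackContext:
    """Shared state passed along the chain; each attack reads what it
    needs and writes its outputs for downstream attacks."""
    artifacts: Dict[str, Any] = field(default_factory=dict)


class Attack:
    name = "base"

    def run(self, ctx: AttackContext) -> None:
        raise NotImplementedError


class ModelExtraction(Attack):
    """Stage 1: query the victim model and build a surrogate (stubbed here)."""
    name = "model_extraction"

    def run(self, ctx: AttackContext) -> None:
        victim: Callable[[str], float] = ctx.artifacts["victim_api"]
        queries = ["sample text a", "sample text bc"]
        scores = [victim(q) for q in queries]
        # A real implementation would train a surrogate model on these
        # query/response pairs; here we just store them.
        ctx.artifacts["surrogate"] = dict(zip(queries, scores))


class MembershipInference(Attack):
    """Stage 2: use the surrogate's confidences to guess training membership."""
    name = "membership_inference"

    def run(self, ctx: AttackContext) -> None:
        surrogate: Dict[str, float] = ctx.artifacts["surrogate"]
        threshold = 0.5  # toy decision rule standing in for a learned attack
        ctx.artifacts["members"] = [
            q for q, score in surrogate.items() if score > threshold
        ]


def run_chain(attacks: List[Attack], ctx: AttackContext) -> AttackContext:
    """Execute attacks in order; each consumes the previous stage's artifacts."""
    for attack in attacks:
        attack.run(ctx)
    return ctx


if __name__ == "__main__":
    # Stand-in victim API that returns a fake confidence score.
    ctx = AttackContext(artifacts={"victim_api": lambda text: float(len(text) % 2)})
    result = run_chain([ModelExtraction(), MembershipInference()], ctx)
    print(result.artifacts["members"])
```

The design point this sketch tries to capture is the one the abstract makes: a shared context lets heterogeneous attacks be composed freely, so a practitioner can assemble lower-level attacks into a higher-level objective (and, symmetrically, evaluate defenses against the whole chain rather than against each attack in isolation).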
Anthology ID:
2024.findings-emnlp.147
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2615–2636
URL:
https://aclanthology.org/2024.findings-emnlp.147
Cite (ACL):
Wei Huang, Yinggui Wang, and Cen Chen. 2024. Privacy Evaluation Benchmarks for NLP Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2615–2636, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Privacy Evaluation Benchmarks for NLP Models (Huang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.147.pdf