Tianhao Wang
2024
Machine Unlearning of Pre-trained Large Language Models
Jin Yao
|
Eli Chien
|
Minxin Du
|
Xinyao Niu
|
Tianhao Wang
|
Zezhou Cheng
|
Xiang Yue
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
This study investigates the concept of the ‘right to be forgotten’ within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models, a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over 10^5 times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development.
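The combined gradient-ascent/gradient-descent objective mentioned in the abstract can be sketched roughly as one training step below; this is an illustrative assumption rather than the paper's code, and the helper name unlearning_step, the alpha weighting, and the Hugging-Face-style model(**batch).loss interface are mine.

    import torch

    def unlearning_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
        """One combined update: gradient ascent on the forget data,
        gradient descent on in-distribution (retain) data."""
        optimizer.zero_grad()
        # Maximize the LM loss on the forget batch (gradient ascent via negation).
        forget_loss = -model(**forget_batch).loss
        # Minimize the LM loss on in-distribution data to preserve general ability.
        retain_loss = model(**retain_batch).loss
        (forget_loss + alpha * retain_loss).backward()
        optimizer.step()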
2022
An Empirical Analysis of Memorization in Fine-tuned Autoregressive Language Models
Fatemehsadat Mireshghallah
|
Archit Uniyal
|
Tianhao Wang
|
David Evans
|
Taylor Berg-Kirkpatrick
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Large language models are shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been given to the fine-tuning phase, and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, the model head, or adapters) compare in terms of memorization risk. This presents increasing concern as the “pre-train and fine-tune” paradigm proliferates. In this paper, we empirically study memorization of fine-tuning methods using membership inference and extraction attacks, and show that their susceptibility to attacks is very different. We observe that fine-tuning the head of the model has the highest susceptibility to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
2021
Differential Privacy for Text Analytics via Natural Text Sanitization
Xiang Yue
|
Minxin Du
|
Tianhao Wang
|
Yaliang Li
|
Huan Sun
|
Sherman S. M. Chow
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021