Learning to Refuse: Towards Mitigating Privacy Risks in LLMs

Zhenhua Liu, Tong Zhu, Chuanyuan Tan, Wenliang Chen


Abstract
Large language models (LLMs) exhibit remarkable capabilities in understanding and generating natural language. However, these models can inadvertently memorize private information, posing significant privacy risks. This study addresses the challenge of enabling LLMs to protect specific individuals’ private data without the need for complete retraining. We propose RETURN, a Real-world pErsonal daTa UnleaRNing dataset, comprising 2,492 individuals from Wikipedia with associated QA pairs, to evaluate machine unlearning (MU) methods for protecting personal data in a realistic scenario. Additionally, we introduce the Name-Aware Unlearning Framework (NAUF) for Privacy Protection, which enables the model to learn which individuals’ information should be protected without affecting its ability to answer questions about other, unrelated individuals. Our extensive experiments demonstrate that NAUF achieves a state-of-the-art average unlearning score, surpassing the best baseline method by 5.65 points, effectively protecting target individuals’ personal data while maintaining the model’s general capabilities.
Anthology ID: 2025.coling-main.114
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 1683–1698
URL: https://aclanthology.org/2025.coling-main.114/
Cite (ACL): Zhenhua Liu, Tong Zhu, Chuanyuan Tan, and Wenliang Chen. 2025. Learning to Refuse: Towards Mitigating Privacy Risks in LLMs. In Proceedings of the 31st International Conference on Computational Linguistics, pages 1683–1698, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Learning to Refuse: Towards Mitigating Privacy Risks in LLMs (Liu et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.114.pdf