Unveiling Entity-Level Unlearning for Large Language Models: A Comprehensive Analysis

Weitao Ma, Xiaocheng Feng, Weihong Zhong, Lei Huang, Yangfan Ye, Xiachong Feng, Bing Qin


Abstract
Large language model unlearning has garnered increasing attention for its potential to address security and privacy concerns, leading to extensive research in the field. However, existing studies have predominantly focused on instance-level unlearning, i.e., removing predefined instances that contain sensitive content. This focus leaves a gap in the removal of an entire entity, which is critical in real-world scenarios such as copyright protection. To close this gap, we propose a novel task, entity-level unlearning, which aims to completely erase entity-related knowledge from the target model. To investigate this task, we systematically evaluate popular unlearning algorithms and find that current methods struggle to achieve effective entity-level unlearning. We then explore the factors that influence unlearning performance, identifying that the knowledge coverage of the forget set and its size play pivotal roles. Notably, our analysis also reveals that entities introduced through fine-tuning are more vulnerable during unlearning than pre-trained entities. We hope these findings will inspire future improvements in entity-level unlearning for LLMs.
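For readers unfamiliar with the "popular unlearning algorithms" the abstract evaluates, the sketch below illustrates one widely used family: gradient ascent on a forget set combined with a retain-set regularizer. This is a minimal illustration of the general technique, not the paper's own implementation; the function name, the retain_weight hyperparameter, and the batch format (a Hugging Face-style causal LM that returns a .loss when labels are included in the batch) are all illustrative assumptions.

```python
def unlearning_step(model, forget_batch, retain_batch, optimizer, retain_weight=1.0):
    """One optimization step of gradient-ascent unlearning with a retain term.

    Ascends the loss on forget-set examples (to erase the targeted knowledge)
    while descending the loss on retain-set examples (to preserve general
    ability). Assumes batches contain input_ids, attention_mask, and labels,
    and that the model returns an object with a .loss attribute.
    """
    optimizer.zero_grad()
    forget_loss = model(**forget_batch).loss
    retain_loss = model(**retain_batch).loss
    # Negating the forget loss turns gradient descent into ascent on that data.
    loss = -forget_loss + retain_weight * retain_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In entity-level unlearning, the forget set would consist of question-answer or passage data covering the target entity; the abstract's finding is that how much of the entity's knowledge this set covers, and how large it is, strongly affect how well such objectives work.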
Anthology ID:
2025.coling-main.358
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
5345–5363
URL:
https://aclanthology.org/2025.coling-main.358/
Cite (ACL):
Weitao Ma, Xiaocheng Feng, Weihong Zhong, Lei Huang, Yangfan Ye, Xiachong Feng, and Bing Qin. 2025. Unveiling Entity-Level Unlearning for Large Language Models: A Comprehensive Analysis. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5345–5363, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Unveiling Entity-Level Unlearning for Large Language Models: A Comprehensive Analysis (Ma et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.358.pdf