Human-Inspired Obfuscation for Model Unlearning: Local and Global Strategies with Hyperbolic Representations

Zekun Wang, Jingjie Zeng, Yingxu Li, Liang Yang, Hongfei Lin


Abstract
Large language models (LLMs) achieve remarkable performance across various domains, largely due to training on massive datasets. However, this also raises growing concerns over the exposure of sensitive and private information, making model unlearning increasingly critical. Existing methods, however, often struggle to balance effective forgetting with maintaining model utility. In this work, we propose HyperUnlearn, a human-inspired unlearning framework. We construct two types of fuzzy data—local and global—to simulate forgetting, and represent them in hyperbolic and Euclidean spaces, respectively. Unlearning is performed on a model with frozen early layers to isolate forgetting and preserve useful knowledge. Experiments demonstrate that HyperUnlearn effectively forgets sensitive content while maintaining the model’s language understanding, fluency, and benchmark performance, offering a practical trade-off between forgetting and capability preservation.
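The sketch below is a minimal, hypothetical illustration of two ideas named in the abstract — freezing early layers so that unlearning updates only later layers, and mapping forget-set representations into hyperbolic space (a Poincaré ball) while retain-set representations stay Euclidean. It is not the authors' released code; the layer count NUM_FROZEN, the curvature constant, the exp_map_zero helper, and the toy norm-shrinking objective are all placeholder assumptions for exposition.

import torch
import torch.nn as nn

NUM_FROZEN = 4   # hypothetical: number of early layers to freeze
CURVATURE = 1.0  # hypothetical: curvature c of the Poincaré ball

def exp_map_zero(x: torch.Tensor, c: float = CURVATURE) -> torch.Tensor:
    # Exponential map at the origin of the Poincaré ball: sends Euclidean
    # (tangent-space) vectors into hyperbolic space.
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    sqrt_c = c ** 0.5
    return torch.tanh(sqrt_c * norm) * x / (sqrt_c * norm)

# Stand-in encoder; the paper operates on an LLM, which we do not load here.
encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
model = nn.TransformerEncoder(encoder_layer, num_layers=8)

# Freeze the early layers to isolate forgetting from general knowledge.
for layer in model.layers[:NUM_FROZEN]:
    for p in layer.parameters():
        p.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)

# Toy tensors standing in for fuzzy (forget) and retain batches.
forget_batch = torch.randn(2, 16, 256)
retain_batch = torch.randn(2, 16, 256)

forget_hidden = model(forget_batch)   # representations to obfuscate
retain_hidden = model(retain_batch)   # representations to preserve (utility term omitted)

# Hypothetical forgetting objective: pull hyperbolic forget representations
# toward the origin of the ball, i.e. strip them of specificity.
hyp_forget = exp_map_zero(forget_hidden)
loss = hyp_forget.norm(dim=-1).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice a retain-set (utility) loss and the paper's local/global fuzzy-data construction would be added on top of this skeleton; the snippet only shows where the frozen layers and the hyperbolic projection would sit.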
Anthology ID:
2025.findings-emnlp.774
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14354–14366
URL:
https://aclanthology.org/2025.findings-emnlp.774/
Cite (ACL):
Zekun Wang, Jingjie Zeng, Yingxu Li, Liang Yang, and Hongfei Lin. 2025. Human-Inspired Obfuscation for Model Unlearning: Local and Global Strategies with Hyperbolic Representations. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 14354–14366, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Human-Inspired Obfuscation for Model Unlearning: Local and Global Strategies with Hyperbolic Representations (Wang et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.774.pdf
Checklist:
 2025.findings-emnlp.774.checklist.pdf