Get Confused Cautiously: Textual Sequence Memorization Erasure with Selective Entropy Maximization

Zhaohan Zhang, Ziquan Liu, Ioannis Patras


Abstract
Large Language Models (LLMs) have been found to memorize and recite textual sequences from their training set verbatim, raising broad concerns about privacy and copyright. This Textual Sequence Memorization (TSM) phenomenon creates a strong demand to regulate LLM outputs so that memorized text a user wants forgotten is not generated. However, our empirical study reveals that existing methods for TSM erasure fail to unlearn large numbers of memorized samples without substantially jeopardizing model utility. To achieve a better trade-off between the effectiveness of TSM erasure and model utility in LLMs, our paper proposes a new method, named Entropy Maximization with Selective Optimization (EMSO), in which model parameters are updated sparsely based on novel optimization and selection criteria, without requiring any additional models or data beyond the forget set. More specifically, we propose an entropy-based loss that is shown to lead to more stable optimization and to better preserve model utility than existing methods. In addition, we propose a contrastive gradient metric that considers both gradient magnitude and direction in order to localize the model parameters to update in a sparse updating scheme. Extensive experiments across three model scales demonstrate that our method excels at handling large-scale forgetting requests while preserving the model's language generation and understanding abilities.
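To make the mechanism concrete, below is a minimal PyTorch sketch of the two ingredients the abstract describes: a loss that maximizes the entropy of the next-token distribution on forget-set sequences, and a gradient-magnitude filter that keeps the parameter update sparse. The function names, the top_fraction parameter, and the per-tensor thresholding are illustrative assumptions rather than the paper's EMSO implementation; in particular, the paper's contrastive gradient metric also uses gradient direction, which this sketch omits.

import torch
import torch.nn.functional as F

def entropy_maximization_loss(logits, attention_mask=None):
    """Negative mean token-level entropy of the next-token distribution.

    Minimizing this loss *maximizes* entropy, pushing the model toward a
    uniform ("confused") prediction on forget-set tokens.
    logits: (batch, seq_len, vocab_size)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    token_entropy = -(probs * log_probs).sum(dim=-1)  # (batch, seq_len)
    if attention_mask is not None:
        token_entropy = token_entropy * attention_mask
        return -token_entropy.sum() / attention_mask.sum()
    return -token_entropy.mean()

def mask_small_gradients(model, top_fraction=0.01):
    """Zero out all but the largest-magnitude gradients so that only a
    sparse subset of parameters is updated. The threshold is computed per
    tensor for simplicity; the paper's contrastive criterion additionally
    accounts for gradient direction, which is not modeled here.
    """
    for p in model.parameters():
        if p.grad is None or p.grad.numel() == 0:
            continue
        k = max(1, int(top_fraction * p.grad.numel()))
        threshold = p.grad.abs().flatten().topk(k).values[-1]
        p.grad.mul_((p.grad.abs() >= threshold).to(p.grad.dtype))

# Hypothetical single unlearning step on a forget-set batch, assuming a
# Hugging Face-style causal LM and pre-tokenized inputs:
#   logits = model(**batch).logits
#   loss = entropy_maximization_loss(logits, batch["attention_mask"])
#   loss.backward()
#   mask_small_gradients(model, top_fraction=0.01)
#   optimizer.step(); optimizer.zero_grad()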
Anthology ID:
2025.coling-main.726
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
10924–10939
URL:
https://aclanthology.org/2025.coling-main.726/
Cite (ACL):
Zhaohan Zhang, Ziquan Liu, and Ioannis Patras. 2025. Get Confused Cautiously: Textual Sequence Memorization Erasure with Selective Entropy Maximization. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10924–10939, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Get Confused Cautiously: Textual Sequence Memorization Erasure with Selective Entropy Maximization (Zhang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.726.pdf