Demystifying Verbatim Memorization in Large Language Models

Jing Huang, Diyi Yang, Christopher Potts


Abstract
Large Language Models (LLMs) frequently memorize long sequences verbatim, often with serious legal and privacy implications. Much prior work has studied such verbatim memorization using observational data. To complement such work, we develop a framework to study verbatim memorization in a controlled setting by continuing pre-training from Pythia checkpoints with injected sequences. We find that (1) non-trivial amounts of repetition are necessary for verbatim memorization to happen; (2) later (and presumably better) checkpoints are more likely to verbatim memorize sequences, even for out-of-distribution sequences; (3) the generation of memorized sequences is triggered by distributed model states that encode high-level features and makes important use of general language modeling capabilities. Guided by these insights, we develop stress tests to evaluate unlearning methods and find they often fail to remove the verbatim memorized information, while also degrading the LM. Overall, these findings challenge the hypothesis that verbatim memorization stems from specific model weights or mechanisms. Rather, verbatim memorization is intertwined with the LM’s general capabilities and thus will be very difficult to isolate and suppress without degrading model quality.
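The memorization test implied by the abstract can be sketched generically: a sequence counts as verbatim memorized if, when the model is prompted with the sequence's prefix, greedy decoding reproduces the remainder exactly. The sketch below is an illustration, not the authors' code; `generate_fn`, `toy_generate`, and the token lists are hypothetical stand-ins for a real LM's decoding interface.

```python
# Hedged sketch of a verbatim-memorization check (not the paper's code).
# A sequence is "verbatim memorized" if greedy decoding from its prefix
# reproduces the remaining tokens exactly.

def is_verbatim_memorized(generate_fn, sequence, prefix_len):
    """generate_fn(prompt_tokens, n_tokens) -> list of n_tokens token ids
    produced by greedy decoding. Returns True on an exact continuation match."""
    prefix, target = sequence[:prefix_len], sequence[prefix_len:]
    continuation = generate_fn(prefix, len(target))
    return continuation == target

# Toy stand-in for a model that has memorized the injected sequence:
injected = [1, 2, 3, 4, 5, 6]

def toy_generate(prompt, n):
    # Pretends the model deterministically continues the injected sequence.
    start = len(prompt)
    return injected[start:start + n]

print(is_verbatim_memorized(toy_generate, injected, prefix_len=2))   # True
print(is_verbatim_memorized(lambda p, n: [0] * n, injected, 2))      # False
```

In practice, `generate_fn` would wrap a real model's greedy decoder (e.g., a Pythia checkpoint after continued pre-training on the injected sequences), and the prefix length controls how much context the trigger state is given.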
Anthology ID:
2024.emnlp-main.598
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10711–10732
URL:
https://aclanthology.org/2024.emnlp-main.598
DOI:
10.18653/v1/2024.emnlp-main.598
Cite (ACL):
Jing Huang, Diyi Yang, and Christopher Potts. 2024. Demystifying Verbatim Memorization in Large Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10711–10732, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Demystifying Verbatim Memorization in Large Language Models (Huang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.598.pdf