Learning, Forgetting, Remembering: Insights From Tracking LLM Memorization During Training

Danny D. Leybzon, Corentin Kervadec


Abstract
Large language models memorize portions of their training data verbatim. Our findings indicate that models exhibit higher memorization rates both early on and at the very end of their training, with the lowest rates occurring midway through the process. This phenomenon can be attributed to the models retaining most of the examples memorized early on, while forgetting many more examples as training progresses. Interestingly, these forgotten examples are sometimes re-memorized later on, often undergoing cycles of forgetting and re-memorization. Notably, examples memorized early in training are more likely to remain consistently retained, suggesting that they become more firmly ‘crystallized’ in the model’s representation. Based on these insights, we tentatively recommend placing data that is more likely to be sensitive in the middle stages of the training process.
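The sketch below illustrates one common way to test verbatim memorization at a given checkpoint: prompt the model with a fixed-length prefix of a training example, decode greedily, and check whether the generated continuation matches the true suffix token-for-token. The model name, checkpoint revisions, and 32-token prefix/suffix lengths are illustrative assumptions, not necessarily the setup used in the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical setup: the Pythia suite publishes intermediate training
# checkpoints as git revisions ("step1000", "step2000", ...). The exact
# model, checkpoint schedule, and prefix/suffix lengths are assumptions
# for illustration only.
MODEL_NAME = "EleutherAI/pythia-1b"
REVISIONS = ["step1000", "step71000", "step143000"]
PREFIX_LEN, SUFFIX_LEN = 32, 32

def is_memorized(model, token_ids, prefix_len=PREFIX_LEN, suffix_len=SUFFIX_LEN):
    """Greedy-decode from a prefix of a training example and report whether
    the model reproduces the true continuation token-for-token."""
    prefix = torch.tensor([token_ids[:prefix_len]])
    true_suffix = token_ids[prefix_len:prefix_len + suffix_len]
    with torch.no_grad():
        out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
    return out[0, prefix_len:].tolist() == true_suffix

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# In practice the example would be a 64-token window drawn from the model's
# training corpus; a placeholder string is tokenized here.
example = tokenizer("Replace this with a sequence sampled from the training data.")["input_ids"]

# Re-running the same check at successive checkpoints yields a per-example
# memorization trajectory over training.
for rev in REVISIONS:
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, revision=rev).eval()
    print(rev, is_memorized(model, example))

Applying this check to a fixed set of examples at many checkpoints produces memorization trajectories from which forgetting and re-memorization events, as described in the abstract, can be read off.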
Anthology ID:
2024.blackboxnlp-1.4
Volume:
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2024
Address:
Miami, Florida, US
Editors:
Yonatan Belinkov, Najoung Kim, Jaap Jumelet, Hosein Mohebbi, Aaron Mueller, Hanjie Chen
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
43–57
URL:
https://aclanthology.org/2024.blackboxnlp-1.4
Cite (ACL):
Danny D. Leybzon and Corentin Kervadec. 2024. Learning, Forgetting, Remembering: Insights From Tracking LLM Memorization During Training. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 43–57, Miami, Florida, US. Association for Computational Linguistics.
Cite (Informal):
Learning, Forgetting, Remembering: Insights From Tracking LLM Memorization During Training (Leybzon & Kervadec, BlackboxNLP 2024)
PDF:
https://aclanthology.org/2024.blackboxnlp-1.4.pdf