2024
Learning, Forgetting, Remembering: Insights From Tracking LLM Memorization During Training
Danny D. Leybzon | Corentin Kervadec
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Large language models memorize portions of their training data verbatim. Our findings indicate that models exhibit higher memorization rates both early on and at the very end of their training, with the lowest rates occurring midway through the process. This phenomenon can be attributed to the models retaining most of the examples memorized early on, while forgetting many more examples as training progresses. Interestingly, these forgotten examples are sometimes re-memorized later on, often undergoing cycles of forgetting and re-memorization. Notably, examples memorized early in training are more likely to remain consistently retained, suggesting that they become more firmly ‘crystallized’ in the model’s representation. Based on these insights, we tentatively recommend placing data that is more likely to be sensitive in the middle stages of the training process.
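To make the notion of verbatim memorization concrete, here is a minimal sketch of one common way such a test can be run against a single model checkpoint: prompt the model with a prefix of a training example and check whether greedy decoding reproduces the true continuation token-for-token. The model name, prefix/suffix lengths, and the `is_memorized` helper are illustrative assumptions, not the paper's exact protocol.

```python
# Hypothetical sketch of a verbatim-memorization check at one checkpoint.
# Assumptions (not from the paper): HuggingFace-style model/tokenizer,
# 32-token prefix and suffix, exact-match criterion, Pythia as the model family.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def is_memorized(model, tokenizer, example_text, prefix_tokens=32, suffix_tokens=32):
    """True if greedy decoding from the example's prefix reproduces the
    example's true continuation exactly (an exact-match memorization test)."""
    ids = tokenizer(example_text, return_tensors="pt").input_ids[0]
    if ids.shape[0] < prefix_tokens + suffix_tokens:
        return False  # example too short for this prefix/suffix split
    prefix = ids[:prefix_tokens].unsqueeze(0)
    target = ids[prefix_tokens:prefix_tokens + suffix_tokens]
    with torch.no_grad():
        out = model.generate(prefix, max_new_tokens=suffix_tokens, do_sample=False)
    generated = out[0, prefix_tokens:prefix_tokens + suffix_tokens]
    return torch.equal(generated, target)

if __name__ == "__main__":
    # Illustrative checkpointed model; any causal LM with saved checkpoints would do.
    name = "EleutherAI/pythia-160m"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    print(is_memorized(model, tokenizer, "Some training document text ..."))
```

Tracking memorization over training would repeat this test at each checkpoint and record, per example, when it is first memorized and whether it is later forgotten or re-memorized, which is the kind of trajectory the abstract describes.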