The Unreasonable Ineffectiveness of Nucleus Sampling on Mitigating Text Memorization

Luka Borec, Philipp Sadler, David Schlangen


Abstract
This work analyzes the text memorization behavior of large language models (LLMs) when subjected to nucleus sampling. Stochastic decoding methods like nucleus sampling are typically applied to overcome issues such as monotonous and repetitive text generation, which are often observed with maximization-based decoding techniques. We hypothesize that nucleus sampling might also reduce the occurrence of memorization patterns, because it could lead to the selection of tokens outside the memorized sequence. To test this hypothesis, we create a diagnostic dataset with a known distribution of duplicates, which gives us some control over how likely certain parts of the training data are to be memorized. Our analysis of two GPT-Neo models fine-tuned on this dataset shows, interestingly, that (i) increasing the nucleus size reduces memorization only modestly, and (ii) even when models do not engage in “hard” memorization (verbatim reproduction of training samples), they may still display “soft” memorization, whereby they generate outputs that echo the training data without matching it exactly.
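For readers unfamiliar with the decoding strategy under study, the sketch below shows one nucleus (top-p) sampling step as introduced by Holtzman et al. (2020): the next token is drawn only from the smallest set of tokens whose cumulative probability exceeds p. This is a minimal illustrative implementation in PyTorch, not the authors' code; the function name and interface are assumptions.

```python
import torch

def nucleus_sample(logits: torch.Tensor, top_p: float = 0.9) -> int:
    """Draw one token id from the smallest set of tokens whose
    cumulative probability mass exceeds top_p (the "nucleus")."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Mark tokens outside the nucleus; shift the mask by one so the
    # token that pushes the mass past top_p is still kept, and the
    # most probable token is never removed.
    outside = cumulative > top_p
    outside[1:] = outside[:-1].clone()
    outside[0] = False
    sorted_probs[outside] = 0.0
    # Renormalize over the nucleus and sample.
    sorted_probs = sorted_probs / sorted_probs.sum()
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return int(sorted_ids[choice].item())
```

With a small top_p, sampling collapses toward greedy decoding (favoring memorized continuations); with top_p close to 1, lower-probability tokens outside a memorized sequence become eligible, which is the intuition behind the paper's hypothesis.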
Anthology ID:
2024.inlg-main.30
Volume:
Proceedings of the 17th International Natural Language Generation Conference
Month:
September
Year:
2024
Address:
Tokyo, Japan
Editors:
Saad Mahamood, Nguyen Le Minh, Daphne Ippolito
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
358–370
URL:
https://aclanthology.org/2024.inlg-main.30
Cite (ACL):
Luka Borec, Philipp Sadler, and David Schlangen. 2024. The Unreasonable Ineffectiveness of Nucleus Sampling on Mitigating Text Memorization. In Proceedings of the 17th International Natural Language Generation Conference, pages 358–370, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
The Unreasonable Ineffectiveness of Nucleus Sampling on Mitigating Text Memorization (Borec et al., INLG 2024)
PDF:
https://aclanthology.org/2024.inlg-main.30.pdf