Finding Memo: Extractive Memorization in Constrained Sequence Generation Tasks

Vikas Raunak, Arul Menezes


Abstract
Memorization presents a challenge for several constrained Natural Language Generation (NLG) tasks such as Neural Machine Translation (NMT), wherein the proclivity of neural models to memorize noisy and atypical samples interacts adversely with the noisy (web-crawled) datasets. However, previous studies of memorization in constrained NLG tasks have only focused on counterfactual memorization, linking it to the problem of hallucinations. In this work, we propose a new, inexpensive algorithm for extractive memorization (exact training data generation under insufficient context) in constrained sequence generation tasks and use it to study extractive memorization and its effects in NMT. We demonstrate that extractive memorization poses a serious threat to NMT reliability by qualitatively and quantitatively characterizing the memorized samples as well as the model behavior in their vicinity. Based on empirical observations, we develop a simple algorithm that elicits, from the same model, non-memorized translations for a large fraction of memorized samples. Finally, we show that the proposed algorithm can also be leveraged to mitigate memorization in the model through finetuning. We have released the code to reproduce our results at https://github.com/vyraun/Finding-Memo.
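
As a rough illustration of the definition given in the abstract ("exact training data generation under insufficient context"), the following Python sketch flags a training pair as extractively memorized when the model reproduces the exact training target from a truncated source prefix. This is not the authors' released implementation; the translate function and the word-level truncation scheme are assumptions made for illustration only.

# Minimal sketch: detect extractive memorization as exact training-target
# generation under insufficient (truncated) source context.
# `translate` is a hypothetical wrapper around any NMT model's inference call.
from typing import Callable, List, Tuple

def find_extractive_memorization(
    pairs: List[Tuple[str, str]],        # (source, target) training pairs
    translate: Callable[[str], str],     # hypothetical: source -> model output
    min_prefix_words: int = 1,
) -> List[Tuple[str, str, str]]:
    """Return (source, truncated_source, target) triples where the model
    emits the exact training target from a truncated source prefix."""
    memorized = []
    for source, target in pairs:
        words = source.split()
        # Try progressively longer prefixes, all strictly shorter than the
        # full source, i.e. insufficient context.
        for cut in range(min_prefix_words, len(words)):
            prefix = " ".join(words[:cut])
            if translate(prefix).strip() == target.strip():
                memorized.append((source, prefix, target))
                break
    return memorized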
Anthology ID:
2022.findings-emnlp.378
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5153–5162
URL:
https://aclanthology.org/2022.findings-emnlp.378
DOI:
10.18653/v1/2022.findings-emnlp.378
Cite (ACL):
Vikas Raunak and Arul Menezes. 2022. Finding Memo: Extractive Memorization in Constrained Sequence Generation Tasks. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5153–5162, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Finding Memo: Extractive Memorization in Constrained Sequence Generation Tasks (Raunak & Menezes, Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.378.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.378.mp4