Understanding Unintended Memorization in Language Models Under Federated Learning

Om Dipakbhai Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Françoise Beaufays


Abstract
Recent works have shown that language models (LMs), e.g., those used for next-word prediction (NWP), tend to memorize rare or unique sequences in their training data. Since useful LMs are often trained on sensitive data, it is critical to identify and mitigate such unintended memorization. Federated Learning (FL) has emerged as a novel framework for large-scale distributed learning tasks. It differs in many aspects from the well-studied central learning setting, where all the data is stored at a central server and minibatch stochastic gradient descent is used for training. This work is motivated by our observation that NWP models trained under FL exhibit a remarkably lower propensity for such memorization than their centrally trained counterparts. We therefore initiate a formal study of how the different components of FL affect unintended memorization in trained NWP models. Our results show that several distinguishing components of FL play an important role in reducing unintended memorization. First, we find that the clustering of data by user—which happens by design in FL—has the most significant effect in reducing such memorization. Training with the Federated Averaging optimizer, which uses larger effective minibatch sizes, causes a further reduction. We also demonstrate that training in FL with a user-level differential privacy guarantee yields models that provide high utility while being resilient to memorizing out-of-distribution phrases, even with thousands of insertions across more than a hundred users in the training set.
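For intuition on the aggregation step the abstract refers to, below is a minimal sketch of one Federated Averaging round. It is an illustrative toy (a linear model with squared loss), not the paper's implementation; the `client_update` and `server_round` helpers, the learning rate, and the cohort size are hypothetical choices for exposition.

```python
# Illustrative sketch of one Federated Averaging (FedAvg) round.
# Each client's data stays on-device; only model deltas are averaged.
import numpy as np

def client_update(weights, client_data, lr=0.1, epochs=1):
    """Run local SGD on one client's examples; return the weight delta."""
    w = weights.copy()
    for _ in range(epochs):
        for x, y in client_data:  # x: feature vector, y: scalar target
            grad = 2.0 * x * (np.dot(w, x) - y)  # gradient of squared loss for a linear model
            w -= lr * grad
    return w - weights

def server_round(weights, clients, cohort_size=100, rng=np.random.default_rng(0)):
    """Sample a cohort of clients and average their local updates."""
    idx = rng.choice(len(clients), size=min(cohort_size, len(clients)), replace=False)
    deltas = [client_update(weights, clients[i]) for i in idx]
    # Uniform average over the cohort; weighting by per-client example
    # counts is a common FedAvg variant.
    return weights + np.mean(deltas, axis=0)
```

Because each round averages updates over an entire cohort of users, the effective minibatch is much larger than in central SGD, which is the mechanism the abstract ties to a further reduction in memorization. Clipping each client delta and adding Gaussian noise to the average would turn this sketch into DP-FedAvg, the kind of user-level differentially private training the paper evaluates.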
Anthology ID:
2021.privatenlp-1.1
Volume:
Proceedings of the Third Workshop on Privacy in Natural Language Processing
Month:
June
Year:
2021
Address:
Online
Editors:
Oluwaseyi Feyisetan, Sepideh Ghanavati, Shervin Malmasi, Patricia Thaine
Venue:
PrivateNLP
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/2021.privatenlp-1.1
DOI:
10.18653/v1/2021.privatenlp-1.1
Cite (ACL):
Om Dipakbhai Thakkar, Swaroop Ramaswamy, Rajiv Mathews, and Françoise Beaufays. 2021. Understanding Unintended Memorization in Language Models Under Federated Learning. In Proceedings of the Third Workshop on Privacy in Natural Language Processing, pages 1–10, Online. Association for Computational Linguistics.
Cite (Informal):
Understanding Unintended Memorization in Language Models Under Federated Learning (Thakkar et al., PrivateNLP 2021)
PDF:
https://aclanthology.org/2021.privatenlp-1.1.pdf