2024
Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models
Yae Jee Cho | Luyang Liu | Zheng Xu | Aldi Fahrezi | Gauri Joshi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Foundation models (FMs) adapt surprisingly well to downstream tasks with fine-tuning. However, their colossal parameter space prohibits training them on resource-constrained edge devices. For federated fine-tuning, we need to consider smaller FMs with at most a few billion parameters, namely on-device FMs (ODFMs), which can be deployed on-device. Federated fine-tuning of ODFMs has unique challenges not present in standard fine-tuning: i) ODFMs generalize poorly to downstream tasks due to their limited size, making proper fine-tuning imperative to their performance, and ii) devices have limited and heterogeneous system capabilities and data that can hinder fine-tuning performance. Tackling these challenges, we propose HetLoRA, a feasible and effective federated fine-tuning method for ODFMs that leverages system and data heterogeneity at the edge. HetLoRA allows heterogeneous LoRA ranks across clients according to their individual system resources, and efficiently aggregates and distributes these LoRA modules in a data-aware manner by applying rank self-pruning locally and sparsity-weighted aggregation at the server. It combines the advantages of high- and low-rank LoRAs, achieving improved convergence speed and final performance compared to homogeneous LoRA. Furthermore, HetLoRA has better computation and communication efficiency than full fine-tuning, making it more feasible for the edge.
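To make the aggregation step concrete, here is a minimal sketch (not the authors' code) of how LoRA factors of heterogeneous ranks could be combined by zero-padding to the largest rank; the norm-based weights below are an illustrative stand-in for the paper's sparsity-weighted aggregation, and the local rank self-pruning step is omitted.

```python
# Illustrative sketch of heterogeneous-rank LoRA aggregation.
import numpy as np

def aggregate_hetlora(client_As, client_Bs, max_rank):
    """client_As[k]: (r_k, d_in) and client_Bs[k]: (d_out, r_k) LoRA factors."""
    d_in = client_As[0].shape[1]
    d_out = client_Bs[0].shape[0]
    agg_A = np.zeros((max_rank, d_in))
    agg_B = np.zeros((d_out, max_rank))
    # Placeholder weights proportional to each client's LoRA update magnitude
    # (the actual sparsity-weighted scheme is defined in the paper).
    weights = np.array([np.linalg.norm(B @ A) for A, B in zip(client_As, client_Bs)])
    weights = weights / weights.sum()
    for w, A, B in zip(weights, client_As, client_Bs):
        r = A.shape[0]
        agg_A[:r] += w * A          # zero-pad smaller ranks up to max_rank
        agg_B[:, :r] += w * B
    return agg_A, agg_B             # each client truncates back to its own rank
```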
User Inference Attacks on Large Language Models
Nikhil Kandpal | Krishna Pillutla | Alina Oprea | Peter Kairouz | Christopher A. Choquette-Choo | Zheng Xu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Text written by humans makes up the vast majority of the data used to pre-train and fine-tune large language models (LLMs). Many sources of this data—like code, forum posts, personal websites, and books—are easily attributed to one or a few “users”. In this paper, we ask whether it is possible to infer if any of a _user’s_ data was used to train an LLM. Not only would this constitute a breach of privacy, but it would also enable users to detect when their data was used for training. We develop the first effective attacks for _user inference_—at times, with near-perfect success—against LLMs. Our attacks are easy to employ, requiring only black-box access to an LLM and a few samples from the user, which _need not be the ones that were trained on_. We find, both theoretically and empirically, that certain properties make users more susceptible to user inference: being an outlier, having highly correlated examples, and contributing a larger fraction of data. Based on these findings, we identify several methods for mitigating user inference including training with example-level differential privacy, removing within-user duplicate examples, and reducing a user’s contribution to the training data. Though these provide partial mitigation, our work highlights the need to develop methods to fully protect LLMs from user inference.
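As a rough illustration of the attack setting, the sketch below scores a user by an average log-likelihood ratio between the target LLM and a reference model over a few of the user's samples. The scoring oracles and the zero threshold are assumptions for illustration, not the paper's exact statistic or calibration procedure.

```python
# Hedged sketch of a likelihood-based user-inference test statistic.
# `target_logprob` and `reference_logprob` are assumed black-box scoring
# callables (e.g., sequence log-likelihood under the fine-tuned LLM and
# under a reference LLM); they are placeholders, not an API from the paper.
import numpy as np

def user_inference_score(user_samples, target_logprob, reference_logprob):
    """Average log-likelihood ratio over a user's held-out samples."""
    ratios = [target_logprob(x) - reference_logprob(x) for x in user_samples]
    return float(np.mean(ratios))

def attack(user_samples, target_logprob, reference_logprob, threshold=0.0):
    # A score above the threshold suggests the user's data was in the training
    # set; in practice the threshold would be calibrated on users known to be
    # outside the training data.
    return user_inference_score(user_samples, target_logprob, reference_logprob) > threshold
```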
A Hassle-free Algorithm for Strong Differential Privacy in Federated Learning Systems
Hugh Brendan McMahan | Zheng Xu | Yanxiang Zhang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Differential privacy (DP) and federated learning (FL) are combined as advanced privacy-preserving methods when training on-device language models in production mobile keyboard applications. DP-Follow-the-Regularized-Leader (DP-FTRL) algorithms, leveraging correlated noise mechanisms such as tree aggregation or matrix factorization, are widely used in practice for their superior privacy-utility trade-off and compatibility with FL systems. This paper presents a novel variant of DP-FTRL by adapting recent theoretical advancements of the Buffered Linear Toeplitz (BLT) mechanism to multi-participant scenarios. In the FL setting, our BLT mechanism demonstrates a better privacy-utility trade-off and improved memory efficiency compared with the widely used tree aggregation mechanism. Moreover, BLT achieves privacy and utility comparable to the state-of-the-art banded matrix factorization mechanism, while significantly simplifying usage requirements and reducing memory. The flexibility of the BLT mechanism allows seamless integration with existing DP FL implementations in production environments. We evaluate the BLT-DP-FTRL algorithm on the StackOverflow dataset, which serves as a research simulation benchmark, and on four on-device language model tasks in a production FL system. Our empirical results highlight the potential of the BLT mechanism to elevate the practicality and effectiveness of DP in real-world scenarios.
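The sketch below only illustrates the general shape of a DP-FTRL-style server step with correlated noise generated from a small buffered state, which is the memory advantage the abstract refers to. The decay and scale parameters, the noise calibration, and the simplified update rule are illustrative assumptions, not the BLT parameterization or privacy accounting from the paper.

```python
# Illustrative correlated-noise generator with a few decaying buffers.
import numpy as np

class BufferedToeplitzNoise:
    """Correlated noise from a small buffered state (illustrative parameters)."""
    def __init__(self, dim, decays=(0.9, 0.5), scales=(0.3, 0.3), noise_std=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.buffers = np.zeros((len(decays), dim))
        self.decays = np.asarray(decays)[:, None]   # per-buffer decay (assumed values)
        self.scales = np.asarray(scales)[:, None]   # per-buffer output weight (assumed)
        self.noise_std = noise_std

    def next(self):
        fresh = self.rng.normal(0.0, self.noise_std, self.buffers.shape[1])
        # Each buffer holds an exponentially decaying sum of past fresh noise, so the
        # emitted noise is a lower-triangular Toeplitz combination of i.i.d. Gaussians.
        self.buffers = self.decays * self.buffers + fresh
        return (self.scales * self.buffers).sum(axis=0)

def noisy_server_step(model, clipped_update_sum, lr, noise_gen):
    # A simplified SGD-style step stands in for the full FTRL update.
    return model - lr * (clipped_update_sum + noise_gen.next())
```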
Can Public Large Language Models Help Private Cross-device Federated Learning?
Boxin Wang | Yibo Zhang | Yuan Cao | Bo Li | Hugh McMahan | Sewoong Oh | Zheng Xu | Manzil Zaheer
Findings of the Association for Computational Linguistics: NAACL 2024
We study (differentially) private federated learning (FL) of language models. The language models in cross-device FL are relatively small and can be trained with meaningful formal user-level differential privacy (DP) guarantees when massive parallelism in training is enabled by the participation of a moderate number of users. Recently, public data has been used to improve privacy-utility trade-offs for both large and small language models. In this work, we provide a systematic study of using large-scale public data and LLMs to help differentially private training of on-device FL models, and further improve the privacy-utility trade-off with distillation techniques. Moreover, we propose a novel distribution matching algorithm with theoretical grounding to sample public data close to the private data distribution, which significantly improves the sample efficiency of (pre-)training on public data. The proposed method is efficient and effective for training private models by taking advantage of public data, especially for customized on-device architectures that do not have ready-to-use pre-trained models.
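A minimal sketch of the distribution-matching idea, assuming two black-box scoring functions (log-probability under a model of the private distribution and under a generic public model): rank public examples by the score gap and keep the closest ones. The paper's actual algorithm and its theoretical guarantees are not reproduced here.

```python
# Illustrative selection of public data "close" to the private distribution.
import numpy as np

def select_public_subset(public_texts, private_logprob, public_logprob, keep_frac=0.1):
    # Higher score = relatively more likely under the private-distribution model.
    scores = np.array([private_logprob(x) - public_logprob(x) for x in public_texts])
    k = max(1, int(keep_frac * len(public_texts)))
    top = np.argsort(-scores)[:k]
    return [public_texts[i] for i in top]
```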
2023
Federated Learning of Gboard Language Models with Differential Privacy
Zheng Xu | Yanxiang Zhang | Galen Andrew | Christopher Choquette | Peter Kairouz | Brendan Mcmahan | Jesse Rosenstock | Yuanbo Zhang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)
We train and deploy language models (LMs) with federated learning (FL) and differential privacy (DP) in Google Keyboard (Gboard). The recent DP-Follow-the-Regularized-Leader (DP-FTRL) algorithm is applied to achieve meaningful formal DP guarantees without requiring uniform sampling of clients. To provide favorable privacy-utility trade-offs, we introduce a new client participation criterion and discuss the implications of its configuration in large-scale systems. We show how quantile-based clip estimation can be combined with DP-FTRL to adaptively choose the clip norm during training or to reduce hyperparameter tuning in preparation for training. With the help of pretraining on public data, we trained and deployed more than fifteen Gboard LMs that achieve high utility and $\rho$-zCDP privacy guarantees with $\rho \in (0.3, 2)$, with one model additionally trained with secure aggregation. We summarize our experience and provide concrete suggestions on DP training for practitioners.
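For intuition, here is a small sketch of quantile-based clip estimation as it is commonly described: privately estimate the fraction of client updates whose norm falls below the current clip, then take a geometric step toward a target quantile. The constants and the noise level are illustrative assumptions, not the production configuration.

```python
# Illustrative adaptive (quantile-based) clip-norm update.
import numpy as np

def update_clip_norm(clip, update_norms, target_quantile=0.5, lr=0.2,
                     count_noise_std=0.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    below = np.mean(np.asarray(update_norms) <= clip)   # fraction of clients under the clip
    below += rng.normal(0.0, count_noise_std)           # in practice this count is noised for DP
    return clip * np.exp(-lr * (below - target_quantile))  # geometric step toward the quantile

# Example: the clip drifts toward the median client-update norm over rounds.
clip = 1.0
for norms in ([0.4, 2.0, 3.1], [0.7, 1.5, 2.2], [0.9, 1.1, 4.0]):
    clip = update_clip_norm(clip, norms, target_quantile=0.5)
```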