Daeyoung Hong
2025

DP-FROST: Differentially Private Fine-tuning of Pre-trained Models with Freezing Model Parameters
Daeyoung Hong | Woohwan Jung | Kyuseok Shim
Proceedings of the 31st International Conference on Computational Linguistics

Training models with differential privacy has received a lot of attentions since differential privacy provides theoretical guarantee of privacy preservation. For a task in a specific domain, since a large-scale pre-trained model in the same domain contains general knowledge of the task, using such a model requires less effort in designing and training the model. However, differentially privately fine-tuning such models having a large number of trainable parameters results in large degradation of utility. Thus, we propose methods that effectively fine-tune the large-scale pre-trained models with freezing unimportant parameters for downstream tasks while satisfying differential privacy. To select the parameters to be fine-tuned, we propose several efficient methods based on the gradients of model parameters. We show the effectiveness of the proposed method by performing experiments with real datasets.