Hao Wang
Stevens Institute of Technology
2025
pFedGPT: Hierarchically Optimizing LoRA Aggregation Weights for Personalized Federated GPT Models
Zhanming Shen | Tianqi Xu | Hao Wang | Jian Li | Miao Pan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Federated fine-tuning of Large Language Models (LLMs) using Low-Rank Adaptation (LoRA) offers computational efficiency and preserves data privacy. However, applying LoRA in federated settings faces significant challenges: standard approaches struggle with data heterogeneity, and existing personalization techniques fail to precisely adapt shared global knowledge to individual client needs. To address these issues, we propose pFedGPT, a framework that leverages Hierarchical Bayesian Optimization (HBO) for fine-grained, personalized LoRA aggregation. pFedGPT intelligently partitions LoRA parameters based on model structure and client information, then employs HBO to hierarchically search for optimal, module-specific weights. This enables a nuanced integration of the downloaded global LoRA state with each client's local model, precisely capturing client-specific requirements. To manage the optimization cost inherent in HBO, pFedGPT incorporates efficient multi-fidelity evaluations and a curriculum learning strategy. Extensive experiments demonstrate that pFedGPT achieves state-of-the-art (SOTA) performance on personalized FL benchmarks, showcasing robustness and scalability while introducing only minimal (approx. 4%) additional optimization overhead. Our results also underscore the limitations of traditional FL methods for LoRA-based LLM personalization, highlighting the need for tailored approaches like pFedGPT.
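The core aggregation step described above can be sketched as a per-module convex combination of the client's local LoRA factors with the downloaded global ones. This is a minimal illustration, not the paper's implementation: the function and parameter names are hypothetical, matrices are flattened to plain lists, and pFedGPT finds the module weights via hierarchical Bayesian optimization rather than fixing them by hand.

```python
def personalize_lora(local_lora, global_lora, module_weights):
    """Blend the downloaded global LoRA state into a client's local LoRA.

    local_lora / global_lora: {module_name: {"A": [...], "B": [...]}},
    where "A" and "B" are the flattened low-rank factors of each module's
    LoRA update. module_weights: {module_name: w} with w in [0, 1], the
    module-specific aggregation weight (in pFedGPT, found by HBO).
    """
    merged = {}
    for module, local_factors in local_lora.items():
        w = module_weights[module]
        merged[module] = {
            # Per-module convex combination of global and local parameters.
            factor: [w * g + (1.0 - w) * l
                     for g, l in zip(global_lora[module][factor], values)]
            for factor, values in local_factors.items()
        }
    return merged
```

With w = 1 a module takes the global LoRA state wholesale and with w = 0 it keeps its local state, so the searched weights interpolate between full sharing and full personalization independently for each module.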