ActPerFL: Active Personalized Federated Learning

Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang


Abstract
In personalized federated learning (FL), the central challenge is to balance local model improvement against global model tuning when the personal and global objectives are not perfectly aligned. Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method in which each client automatically balances the training of its local personal model against that of the global model, which implicitly contributes to other clients' training. This balance is derived from inter-client and intra-client uncertainty quantification, so ActPerFL can adapt to the underlying client heterogeneity with uncertainty-driven local training and model aggregation. In experimental studies on Sent140 and Amazon Alexa audio data, we show that ActPerFL achieves superior personalization performance compared with existing approaches.
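The abstract's core mechanism, balancing a personal model against a global one using uncertainty estimates, can be illustrated with the classical precision-weighted updates of a Gaussian hierarchical model. The sketch below is a rough illustration under that assumption; the function names, the inverse-variance weighting, and the shrinkage rule are illustrative choices, not ActPerFL's actual algorithm (see the PDF for the paper's update rules).

```python
import numpy as np

def aggregate_global(client_models, client_variances):
    """Server step: precision-weighted average of client models.
    Clients with lower (intra-client) uncertainty contribute more."""
    precisions = 1.0 / np.asarray(client_variances)
    weights = precisions / precisions.sum()
    return sum(w * m for w, m in zip(weights, client_models))

def personalize_local(local_model, global_model, local_var, global_var):
    """Client step: Gaussian posterior-mean shrinkage. The more uncertain
    a client is relative to the population, the harder its personal
    model is pulled toward the global one."""
    alpha = global_var / (global_var + local_var)  # weight kept on the local model
    return alpha * local_model + (1.0 - alpha) * global_model

# Toy usage: three clients with increasingly noisy local estimates.
clients = [np.array([1.0, 2.0]), np.array([1.5, 1.8]), np.array([0.5, 2.4])]
variances = [0.1, 0.5, 1.0]
g = aggregate_global(clients, variances)
p0 = personalize_local(clients[0], g, local_var=variances[0],
                       global_var=float(np.mean(variances)))
```

In this toy model, the lowest-variance client dominates the aggregate and is barely shrunk toward it, mirroring the abstract's claim that the local/global balance adapts to client heterogeneity.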
Anthology ID: 2022.fl4nlp-1.1
Volume: Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Bill Yuchen Lin, Chaoyang He, Chulin Xie, Fatemehsadat Mireshghallah, Ninareh Mehrabi, Tian Li, Mahdi Soltanolkotabi, Xiang Ren
Venue: FL4NLP
Publisher: Association for Computational Linguistics
Pages: 1–5
URL: https://aclanthology.org/2022.fl4nlp-1.1
DOI: 10.18653/v1/2022.fl4nlp-1.1
Cite (ACL): Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, and Tao Zhang. 2022. ActPerFL: Active Personalized Federated Learning. In Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022), pages 1–5, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): ActPerFL: Active Personalized Federated Learning (Chen et al., FL4NLP 2022)
PDF: https://aclanthology.org/2022.fl4nlp-1.1.pdf