Scaling Language Model Size in Cross-Device Federated Learning
Jae Ro | Theresa Breiner | Lara McConnaughey | Mingqing Chen | Ananda Suresh | Shankar Kumar | Rajiv Mathews
Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)
Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic application of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M parameter Transformer that achieves the same perplexity as a similarly sized LSTM with ∼10× smaller client-to-server communication cost, and 11% lower perplexity than the smaller LSTMs commonly studied in the literature.
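To make the communication-cost lever concrete, below is a minimal sketch of uniform 8-bit quantization of a client model update before upload. This is a generic uniform quantizer written for illustration; the paper's exact quantization scheme and its interaction with the other techniques (partial model training, transfer learning, communication-efficient optimizers) are not reproduced here.

```python
import numpy as np

def quantize_update(update, num_bits=8):
    """Uniformly quantize a float32 update to num_bits-integer codes.

    Returns the codes plus the (scale, offset) needed to dequantize.
    Illustrative only -- not necessarily the scheme used in the paper.
    """
    levels = 2 ** num_bits - 1
    lo, hi = float(update.min()), float(update.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((update - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize_update(codes, scale, lo):
    """Map integer codes back to approximate float32 values."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
update = rng.normal(size=10_000).astype(np.float32)  # stand-in client update

codes, scale, lo = quantize_update(update)
recovered = dequantize_update(codes, scale, lo)

# uint8 codes are 4x smaller than the float32 update they encode.
compression = update.nbytes / codes.nbytes
max_err = float(np.abs(update - recovered).max())
print(f"compression: {compression:.0f}x, max abs error: {max_err:.4f}")
```

Quantizing to 8 bits alone gives a 4× reduction over float32 uploads; the ∼10× figure in the abstract comes from combining quantization with the other techniques listed.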