RB-LoRA: Rank-Balanced Aggregation for Low-Rank Adaptation with Federated Fine-Tuning

Sihyeon Ha, Yongjeong Oh, Yo-Seb Jeon


Abstract
Federated fine-tuning of foundation models is impeded by the need to communicate billions of parameters. Low-rank adaptation (LoRA) alleviates this by updating only compact adapter matrices. However, varying client device capabilities lead to different adapter ranks, causing rank heterogeneity that undermines aggregation, and existing reconciliation methods still incur bias or inefficiency. To address this challenge, we propose RB-LoRA, a principled rank-balanced aggregation framework that decomposes each update into rank-wise components and aligns them using analytically derived weights. Experiments on both language and vision models demonstrate consistent improvements under both one and three communication rounds of federated learning.
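The rank-wise decomposition mentioned in the abstract can be illustrated with a minimal sketch: each client's LoRA update ΔW = BA is split into rank-1 components, and each component index is averaged only over the clients whose rank covers it. Note the function name, the per-rank renormalisation, and the data-size weights below are illustrative assumptions, not the analytically derived weights of RB-LoRA itself.

```python
import numpy as np

def rank_wise_aggregate(updates, weights):
    """Aggregate LoRA updates of heterogeneous rank.

    updates: list of (B, A) pairs, with B of shape (d, r_k) and
    A of shape (r_k, m); weights: per-client scalars (e.g. data
    fractions). Each update Delta_W = B @ A is decomposed into
    rank-1 components outer(B[:, i], A[i, :]); component i is
    averaged only over clients whose rank exceeds i, so low-rank
    clients do not drag higher components toward zero (the bias
    incurred by naive zero-padding). The renormalisation over
    contributing clients is an illustrative choice, not the
    paper's derived weights.
    """
    d = updates[0][0].shape[0]
    m = updates[0][1].shape[1]
    r_max = max(B.shape[1] for B, _ in updates)
    agg = np.zeros((d, m))
    for i in range(r_max):
        holders = [(w, B, A) for w, (B, A) in zip(weights, updates)
                   if B.shape[1] > i]
        total = sum(w for w, _, _ in holders)
        for w, B, A in holders:
            agg += (w / total) * np.outer(B[:, i], A[i, :])
    return agg
```

With two clients of ranks 2 and 1, the first rank-1 component is a weighted average over both clients, while the second comes from the rank-2 client alone at full weight.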
Anthology ID:
2026.findings-eacl.88
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1737–1746
URL:
https://aclanthology.org/2026.findings-eacl.88/
Cite (ACL):
Sihyeon Ha, Yongjeong Oh, and Yo-Seb Jeon. 2026. RB-LoRA: Rank-Balanced Aggregation for Low-Rank Adaptation with Federated Fine-Tuning. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1737–1746, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
RB-LoRA: Rank-Balanced Aggregation for Low-Rank Adaptation with Federated Fine-Tuning (Ha et al., Findings 2026)
PDF:
https://aclanthology.org/2026.findings-eacl.88.pdf
Checklist:
https://aclanthology.org/2026.findings-eacl.88.checklist.pdf