Federated Learning (FL) has gained significant attention in distributed machine learning by enabling collaborative model training across decentralized systems while preserving data privacy. Although extensive research has addressed statistical data heterogeneity, FL still faces several challenges, including high communication and computation overheads and severe device heterogeneity, which require further investigation. Prior work has addressed these issues through sub-model training and partial parameter training. However, such methods often suffer from inconsistent parameter distributions across clients, inaccurate global loss estimation, and increased bias and variance. Guided by our empirical analysis, we propose FedPLT (Federated Learning with Partial Layer Training), a structured partial-parameter training approach that exhibits training behavior similar to full-model training while assigning each client a portion of the model according to its communication and computational capabilities. In addition, we evaluate the performance of FedPLT when combined with optimal client sampling under communication constraints, and show that this integration improves FL performance by reducing sampling variance under the same communication budget. Through extensive experiments, we demonstrate that FedPLT achieves performance comparable to, or even surpassing, that of full-model training (i.e., FedAvg), while requiring significantly fewer trainable parameters per client. Moreover, FedPLT outperforms existing methods in highly heterogeneous environments, effectively adapts to client resource constraints, and reduces the number of straggling clients. In particular, FedPLT reduces the number of trainable parameters by 71%-82% while achieving performance on par with full-model training.
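The abstract describes partial layer training only at a high level; the following is a minimal, hypothetical sketch (not the authors' implementation) of the general idea: each client trains only a capacity-dependent subset of layers while the rest stay frozen, and the server averages each layer over the clients that actually updated it. The helper names `assign_layers`, `capacity_fraction`, `local_update`, and `aggregate` are assumptions introduced for illustration.

```python
import copy
import torch
import torch.nn as nn

def assign_layers(num_layers: int, capacity_fraction: float) -> set:
    """Hypothetical rule: a client with capacity fraction c trains roughly c of the layers."""
    k = max(1, round(capacity_fraction * num_layers))
    return set(range(num_layers - k, num_layers))  # e.g., train the last k layers

def local_update(global_model: nn.Sequential, trainable: set, data, lr=0.01, epochs=1):
    """Client step: copy the global model, freeze non-assigned layers, train the rest."""
    model = copy.deepcopy(global_model)
    for i, layer in enumerate(model):
        for p in layer.parameters():
            p.requires_grad = i in trainable
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def aggregate(global_model: nn.Sequential, client_models, client_layers):
    """Server step: average each layer only over the clients assigned to it."""
    new_state = copy.deepcopy(global_model.state_dict())
    for i, _ in enumerate(global_model):
        contributors = [m for m, layers in zip(client_models, client_layers) if i in layers]
        if not contributors:
            continue  # a layer trained by no client keeps its previous global value
        for name in global_model[i].state_dict():
            key = f"{i}.{name}"
            new_state[key] = torch.stack([m.state_dict()[key] for m in contributors]).mean(dim=0)
    global_model.load_state_dict(new_state)
    return global_model
```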