Federated learning (FL) enables a set of distributed clients to jointly train machine learning models while preserving the privacy of their local data, making it attractive for applications in healthcare, finance, mobility, and smart-city systems. However, FL faces several challenges, including statistical heterogeneity and uneven client participation, which can degrade convergence and model quality. In this work, we propose FedPBS, an FL algorithm that couples complementary ideas from FedBS and FedProx to address these challenges. FedPBS dynamically adapts batch sizes to client resources to support balanced and scalable participation, and selectively applies a proximal correction to small-batch clients to stabilize local updates and reduce divergence from the global model. Experiments on benchmark datasets (CIFAR-10 and UCI-HAR) under highly non-IID settings show that FedPBS consistently outperforms state-of-the-art methods, including FedBS, FedGA, MOON, and FedProx, delivering robust performance gains under extreme data heterogeneity, with smooth loss curves indicating stable and reliable convergence across diverse federated environments.
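To make the two mechanisms concrete, the following minimal sketch illustrates one plausible form of the FedPBS client update in PyTorch. The helper `pick_batch_size`, the threshold `small_batch_threshold`, and the coefficient `mu` are illustrative assumptions, not the paper's implementation; only the FedProx-style proximal term follows a published formulation.

```python
# Minimal sketch of a FedPBS-style client update, assuming a PyTorch setup.
# pick_batch_size, small_batch_threshold, and mu are hypothetical names.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader


def pick_batch_size(client_capacity, min_bs=8, max_bs=128):
    """Hypothetical resource-aware rule: scale the local batch size with a
    normalized measure of the client's compute/memory capacity in [0, 1]."""
    return max(min_bs, min(max_bs, int(max_bs * client_capacity)))


def local_update(model, global_model, dataset, client_capacity,
                 mu=0.01, small_batch_threshold=32, lr=0.01, epochs=1):
    batch_size = pick_batch_size(client_capacity)
    # Apply the proximal correction only to small-batch clients, whose
    # noisier gradients are more prone to drifting from the global model.
    use_prox = batch_size < small_batch_threshold
    global_params = [p.detach().clone() for p in global_model.parameters()]
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            if use_prox:
                # FedProx-style term: (mu / 2) * ||w - w_global||^2
                prox = sum((p - g).pow(2).sum()
                           for p, g in zip(model.parameters(), global_params))
                loss = loss + 0.5 * mu * prox
            loss.backward()
            opt.step()
    return model.state_dict()
```

Under this reading, resource-rich clients train with large batches and a plain local objective, while constrained clients train with small batches anchored to the global weights, which is one way the two ideas in the abstract could compose.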