Federated Learning (FL) is a decentralized, privacy-preserving machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their data. In real-world scenarios, however, clients often have heterogeneous computational resources and hold non-independent and identically distributed (non-IID) data, which poses significant challenges during training. Personalized Federated Learning (PFL) has emerged to address these issues by customizing a model for each client based on its unique data distribution. Despite its potential, existing PFL approaches typically overlook the coexistence of model and data heterogeneity arising from clients with diverse computational capabilities. To overcome this limitation, we propose a novel method, Progressive Parameter Alignment (FedPPA), which progressively aligns the weights of the common layers across clients with the global model's weights. Our approach not only mitigates inconsistencies between the global and local models during client updates, but also preserves each client's local knowledge, thereby enhancing the robustness of personalization in non-IID settings. To further improve global model performance while retaining strong personalization, we also integrate entropy-based weighted averaging into the FedPPA framework. Experiments on three image classification datasets, MNIST, FMNIST, and CIFAR-10, demonstrate that FedPPA consistently outperforms existing FL algorithms, achieving superior performance in personalized adaptation.
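The abstract names two mechanisms: aligning each client's common-layer weights with the global model, and entropy-based weighted averaging for aggregation. The following is a minimal sketch of one plausible reading of those two ideas, not the authors' implementation; the weighting rule, the `common_keys` notion, and the linear alpha schedule are all assumptions introduced here for illustration.

```python
# Minimal sketch (assumed details, not the FedPPA implementation) of:
# (1) entropy-based weighted averaging: more confident clients (lower
#     predictive entropy) receive larger aggregation weights, and
# (2) progressive alignment: each client's common layers are blended toward
#     the global weights with a mixing factor that grows over rounds.
import numpy as np

def entropy_weights(client_probs, eps=1e-12):
    """Map each client's averaged predictive distribution to an aggregation weight."""
    entropies = np.array([-np.sum(p * np.log(p + eps)) for p in client_probs])
    scores = np.exp(-entropies)            # monotone decreasing in entropy (assumed rule)
    return scores / scores.sum()

def aggregate_common_layers(client_states, weights, common_keys):
    """Weighted average over the shared ('common') layers only."""
    return {k: sum(w * s[k] for w, s in zip(weights, client_states))
            for k in common_keys}

def progressive_align(local_state, global_state, round_idx, total_rounds):
    """Blend common layers toward the global weights; alpha grows linearly
    with the round index (a simple assumed schedule)."""
    alpha = min(1.0, (round_idx + 1) / total_rounds)
    aligned = dict(local_state)
    for k, g in global_state.items():
        aligned[k] = (1 - alpha) * local_state[k] + alpha * g
    return aligned

# Toy usage: two clients, one shared layer "fc.weight", 3-class predictions.
if __name__ == "__main__":
    client_states = [
        {"fc.weight": np.full((3, 4), 0.5), "head.weight": np.zeros((3, 3))},
        {"fc.weight": np.full((3, 4), 1.5), "head.weight": np.ones((3, 3))},
    ]
    client_probs = [np.array([0.9, 0.05, 0.05]),   # confident client
                    np.array([0.4, 0.30, 0.30])]   # less confident client
    w = entropy_weights(client_probs)
    g = aggregate_common_layers(client_states, w, common_keys=["fc.weight"])
    aligned = progressive_align(client_states[0], g, round_idx=2, total_rounds=10)
    print("aggregation weights:", w)
    print("aligned fc.weight mean:", aligned["fc.weight"].mean())
```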