Personalized federated learning (PFL) enables customized models for clients with varying data distributions. However, existing PFL methods often incur high computational and communication costs, limiting their practical application. This paper proposes a novel PFL method, Class-wise Federated Averaging (cwFedAVG), which performs Federated Averaging (FedAVG) class-wise, creating multiple global models on the server, one per class. Each local model integrates these global models weighted by its estimated local class distribution, which is derived from the L2-norms of deep network weights and therefore avoids privacy violations. Conversely, each global model aggregates the local models using the same class-weighted scheme. We also design a Weight Distribution Regularizer (WDR) that further improves the accuracy of the estimated local class distribution by minimizing the Euclidean distance between the class distribution and the distribution of weight norms. Experimental results demonstrate that cwFedAVG matches or outperforms several existing PFL methods. Notably, cwFedAVG is conceptually simple yet computationally efficient, as leveraging shared global models removes the need for extensive pairwise computation between clients. Visualizations provide insights into how cwFedAVG enables each local model to specialize in its own class distribution while the global models capture class-relevant information across clients.
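The core mechanics described above can be sketched as follows; this is a minimal illustration, not the paper's implementation. It assumes the local class distribution is estimated from per-class L2-norms of the final-layer classifier weights, that per-class global models are blended with those estimates as weights, and that WDR penalizes the squared Euclidean distance between the two distributions. All function names are hypothetical.

```python
import numpy as np

def estimate_class_distribution(classifier_weights):
    """Estimate a client's class distribution without sharing labels.

    classifier_weights: (num_classes, feature_dim) final-layer weight matrix.
    The per-class L2-norms of these rows serve as a proxy for the local
    class distribution (assumption drawn from the abstract's description).
    """
    norms = np.linalg.norm(classifier_weights, axis=1)
    return norms / norms.sum()

def personalize_local_model(global_models, class_dist):
    """Blend per-class global models into one personalized local model.

    global_models: list of flat parameter vectors, one per class.
    class_dist: estimated local class distribution (weights of the blend).
    """
    return sum(params * w for params, w in zip(global_models, class_dist))

def wdr_penalty(classifier_weights, class_dist):
    """Weight Distribution Regularizer (illustrative): squared Euclidean
    distance between the normalized weight-norm distribution and the
    client's class distribution, to be added to the local training loss."""
    est = estimate_class_distribution(classifier_weights)
    return float(np.sum((est - np.asarray(class_dist)) ** 2))

# Example: class 0 has larger classifier weights, so it receives
# more probability mass and dominates the blended model.
W = np.array([[3.0, 4.0],   # L2-norm 5.0
              [0.0, 1.0]])  # L2-norm 1.0
dist = estimate_class_distribution(W)          # -> [5/6, 1/6]
globals_per_class = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
local = personalize_local_model(globals_per_class, dist)
```

The server-side step mirrors `personalize_local_model`: each per-class global model averages the received local models weighted by each client's estimated share of that class.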