Federated Learning (FL) has emerged as an essential framework for distributed machine learning, especially for its potential in privacy-preserving data processing. However, existing FL frameworks struggle to address statistical and model heterogeneity, which severely degrades model performance. While Heterogeneous Federated Learning (HtFL) introduces prototype-based strategies to address these challenges, current approaches fall short of achieving optimal prototype separation. This paper presents FedORGP, a novel HtFL algorithm designed to improve global prototype separation through orthogonality regularization, which not only encourages intra-class prototype similarity but also significantly expands inter-class angular separation. Guided by the global prototypes, each client keeps its embeddings aligned with the corresponding prototype in the feature space, promoting a directional independence that integrates seamlessly with the cross-entropy (CE) loss. We provide a theoretical proof of FedORGP's convergence under non-convex conditions. Extensive experiments demonstrate that FedORGP outperforms seven state-of-the-art baselines, achieving up to 10.12\% accuracy improvement in scenarios where statistical and model heterogeneity coexist.
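To make the two ingredients of the objective concrete, the following is a minimal NumPy sketch, not the paper's actual implementation: an orthogonality penalty that drives distinct class prototypes toward mutually orthogonal directions, and an alignment term that pulls each client embedding toward its class prototype via cosine similarity. All function names are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize vectors to unit length so dot products become cosines."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def orthogonality_penalty(prototypes):
    """Mean squared cosine similarity over distinct class pairs.

    prototypes: (C, d) array of class prototypes. The penalty reaches
    its minimum of 0 when all prototypes are mutually orthogonal,
    i.e., maximal inter-class angular separation.
    """
    P = l2_normalize(prototypes)
    G = P @ P.T                       # pairwise cosine similarities
    C = G.shape[0]
    off_diag = G - np.eye(C)          # zero out self-similarities
    return np.sum(off_diag ** 2) / (C * (C - 1))

def alignment_loss(embeddings, labels, prototypes):
    """Pull each embedding toward its class prototype (1 - cosine).

    embeddings: (N, d) client feature vectors; labels: (N,) class ids.
    The loss is 0 when every embedding points along its prototype.
    """
    E = l2_normalize(embeddings)
    P = l2_normalize(prototypes)
    cos = np.sum(E * P[labels], axis=1)
    return np.mean(1.0 - cos)
```

In a training loop, a weighted sum of these terms would be added to the standard CE loss; the exact weighting and the prototype aggregation rule are details of the method not captured by this sketch.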