Heterogeneous federated learning (HFL) aims to enable effective, privacy-preserving collaboration among different entities. Because newly joined clients require significant adjustment and additional training to align with the existing system, generalizing federated learning models to unseen clients under heterogeneous data has become increasingly crucial. Accordingly, we highlight two unresolved challenges in federated domain generalization: Optimization Divergence and Performance Divergence. To tackle these challenges, we propose FedRD, a novel heterogeneity-aware federated learning algorithm that jointly exploits parameter-guided global generalization aggregation and local debiased classification to reduce both divergences, aiming to obtain an optimal global model for both participating and unseen clients. Extensive experiments on public multi-domain datasets demonstrate that our approach substantially outperforms competing baselines on this problem.