One-shot federated learning (FL) limits server-client communication to a single round, greatly reducing the privacy-leakage risks of traditional FL, which requires multiple communication rounds. However, we find that existing one-shot FL frameworks are vulnerable to distributional heterogeneity: they concentrate predominantly on model heterogeneity while paying insufficient attention to data heterogeneity. To fill this gap, we propose a unified, data-free, one-shot federated learning framework (FedHydra) that effectively addresses both model and data heterogeneity. Rather than adopting existing value-only learning mechanisms, FedHydra introduces a structure-value learning mechanism. Specifically, a new stratified learning structure is proposed to cover data heterogeneity, while the value of each item during computation reflects model heterogeneity. With this design, data and model heterogeneity are simultaneously monitored from different aspects during learning, allowing FedHydra to effectively mitigate both issues by minimizing their inherent conflicts. We compared FedHydra with three state-of-the-art (SOTA) baselines on four benchmark datasets. Experimental results show that our method outperforms previous one-shot FL methods in both homogeneous and heterogeneous settings.