While existing federated learning approaches primarily focus on aggregating local models to construct a global model, in realistic settings some clients may be reluctant to share their private models because the parameters encode privacy-sensitive information. Knowledge distillation, which can extract model knowledge without accessing model parameters, is well-suited to this federated scenario. However, most distillation methods in federated learning (federated distillation) require a proxy dataset, which is difficult to obtain in the real world. Therefore, in this paper we introduce a distributed three-player Generative Adversarial Network (GAN) to implement data-free mutual distillation and propose an effective method called FedDTG. We show that the fake samples generated by the GAN make federated distillation more efficient and robust. Moreover, the distillation process between clients delivers strong individual client performance while simultaneously acquiring global knowledge and protecting data privacy. Extensive experiments on benchmark vision datasets demonstrate that our method outperforms other federated distillation algorithms in terms of generalization.
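To make the core idea concrete, the following is a minimal sketch of one data-free mutual-distillation step on generator-produced fakes, written in PyTorch. All names here (`Generator`, `Classifier`, `mutual_distillation_step`) and the loss/architecture choices are illustrative assumptions, not the paper's actual FedDTG implementation; in particular, the adversarial training of the distributed three-player GAN (the discriminator being the third player) is omitted, and only the distillation step on shared fake samples is shown.

```python
# Minimal sketch, assuming a PyTorch setup; illustrative only, not the
# authors' FedDTG code. Clients exchange logits on shared fake samples,
# so neither raw data nor model parameters leave a client.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps noise vectors to fake samples (e.g. flattened 28x28 images)."""
    def __init__(self, z_dim=100, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Classifier(nn.Module):
    """A client's private model; only its logits on fakes are shared."""
    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def mutual_distillation_step(generator, clients, optimizers,
                             z_dim=100, batch_size=64, temperature=2.0):
    """Each client distills from the averaged peer logits on GAN fakes."""
    z = torch.randn(batch_size, z_dim)
    with torch.no_grad():
        fake = generator(z)                              # shared synthetic batch
        all_logits = torch.stack([c(fake) for c in clients])
    for i, (client, opt) in enumerate(zip(clients, optimizers)):
        # Average the *other* clients' soft predictions as the teacher signal.
        peer = (all_logits.sum(0) - all_logits[i]) / (len(clients) - 1)
        teacher = F.softmax(peer / temperature, dim=1)
        student = F.log_softmax(client(fake) / temperature, dim=1)
        loss = F.kl_div(student, teacher, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()

# Usage: three clients mutually distilling on one shared generator's fakes.
gen = Generator()
clients = [Classifier() for _ in range(3)]
opts = [torch.optim.SGD(c.parameters(), lr=0.01) for c in clients]
mutual_distillation_step(gen, clients, opts)
```

Because the teacher signal is an average over peers' outputs on synthetic data, each client absorbs global knowledge without ever exposing its parameters or local dataset, which is the privacy property the abstract claims.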