In federated learning, federated unlearning provides clients with a rollback mechanism, allowing them to withdraw their data contributions without retraining the global model from scratch. However, existing research has not considered scenarios with skewed label distributions. Unfortunately, unlearning a client whose data is skewed typically leaves a biased model that struggles to deliver high-quality service, which complicates the recovery process. This paper proposes a recovery method for federated unlearning under skewed label distributions. Specifically, we first adopt a strategy that combines oversampling with deep learning to supplement the skewed-class data of the clients performing recovery training, thereby improving the completeness of their local datasets. Next, a density-based denoising method removes noise from the generated data, further improving the quality of the remaining clients' datasets. Finally, all remaining clients use the enhanced local datasets in iterative training to effectively restore the performance of the unlearned model. Extensive evaluations on commonly used federated learning datasets with varying degrees of skewness show that our method outperforms baseline methods in restoring the performance of the unlearned model, particularly in accuracy on the skewed class.
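The oversample-then-denoise pipeline described above can be sketched in a minimal form. This is an illustrative approximation, not the paper's actual method: `oversample_minority` fills in skewed-class data by SMOTE-style interpolation between existing minority samples (standing in for the deep-learning-based generation), and `denoise` applies a density-based filter using scikit-learn's Local Outlier Factor to discard low-density synthetic points. All function names and parameters here are hypothetical.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor


def oversample_minority(X, y, minority_label, target_count, rng):
    """Generate synthetic minority-class samples by linear interpolation
    between random pairs of existing minority samples (SMOTE-style).
    Stands in for the paper's deep-learning-based data supplementation."""
    X_min = X[y == minority_label]
    n_new = target_count - len(X_min)
    i = rng.integers(0, len(X_min), n_new)
    j = rng.integers(0, len(X_min), n_new)
    lam = rng.random((n_new, 1))
    return X_min[i] + lam * (X_min[j] - X_min[i])


def denoise(X_new, n_neighbors=5):
    """Density-based denoising: keep only synthetic points that LOF
    marks as inliers (label 1), dropping low-density noise (-1)."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    mask = lof.fit_predict(X_new) == 1
    return X_new[mask]


# Example: a client with 90 majority and only 10 minority samples
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = np.array([0] * 90 + [1] * 10)
X_synth = oversample_minority(X, y, minority_label=1, target_count=90, rng=rng)
X_clean = denoise(X_synth)  # filtered synthetic data joins the local dataset
```

After this local augmentation, each remaining client would train on its enhanced dataset and the server would aggregate the updates over several rounds, as in standard federated averaging.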