Federated Learning (FL) has received much attention in recent years. Although clients are not required to share their data in FL, the global model itself can implicitly memorize clients' local data. It is therefore necessary to effectively remove a target client's data from the FL global model, both to mitigate the risk of privacy leakage and to implement "the right to be forgotten". Federated Unlearning (FU) has been considered a promising way to remove data without full retraining. However, model utility often degrades significantly during unlearning due to gradient conflicts. Furthermore, when post-training is conducted to recover model utility, the model tends to move back and revert what has already been unlearned. To address these issues, we propose Federated Unlearning with Orthogonal Steepest Descent (FedOSD). We first design an unlearning Cross-Entropy loss to overcome the convergence issue of gradient ascent. We then compute a steepest descent direction for unlearning that does not conflict with the other clients' gradients while remaining closest to the target client's gradient. This enables efficient unlearning while mitigating the reduction in model utility. After unlearning, we recover model utility while preserving the unlearning result. Finally, extensive experiments in several FL scenarios verify that FedOSD outperforms SOTA FU algorithms in terms of both unlearning effectiveness and model utility.
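To illustrate the idea of a non-conflicting unlearning direction, the following is a minimal sketch, not the paper's exact solver: it starts from the target client's unlearning gradient and removes any component that conflicts with (has a negative inner product with) another client's gradient, in the spirit of gradient-projection methods. The function name and the sequential projection scheme are illustrative assumptions.

```python
import numpy as np

def orthogonal_unlearning_direction(g_target, other_grads):
    """Hypothetical sketch: return a direction close to the target
    client's unlearning gradient g_target whose inner product with
    each remaining client's gradient is non-negative, so following
    it does not (to first order) increase their losses."""
    d = g_target.astype(float).copy()
    for g in other_grads:
        dot = d @ g
        if dot < 0:  # conflict: project out the conflicting component
            d = d - (dot / (g @ g)) * g
    return d
```

With `g_target = [1, -1]` and a single other gradient `[0, 1]`, the conflicting component along `[0, 1]` is removed, yielding `[1, 0]`: still close to the target direction, but no longer harmful to the other client. Sequential projection of this kind is only guaranteed conflict-free pairwise per step; a full method would solve for the closest direction under all constraints jointly.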