Federated Unlearning (FU) aims to efficiently remove the influence of specific client data from a federated model while preserving utility for the remaining clients. However, three key challenges remain: (1) existing unlearning objectives often compromise model utility or increase vulnerability to Membership Inference Attacks (MIA); (2) there is a persistent conflict between forgetting and utility, where further unlearning inevitably harms retained performance; and (3) support for concurrent multi-client unlearning is limited, as gradient conflicts among clients degrade the quality of forgetting. To address these issues, we propose FUPareto, an efficient unlearning framework based on Pareto-augmented optimization. We first introduce the Minimum Boundary Shift (MBS) Loss, which enforces unlearning by suppressing the target class logit below the highest non-target class logit; this improves unlearning efficiency and mitigates MIA risks. During the unlearning process, FUPareto performs Pareto improvement steps to preserve model utility and executes Pareto expansion steps to guarantee forgetting. Specifically, during Pareto expansion, the framework integrates a Null-Space Projected Multiple Gradient Descent Algorithm (MGDA) to decouple gradient conflicts. This enables effective, fair, and concurrent unlearning for multiple clients while minimizing utility degradation. Extensive experiments across diverse scenarios demonstrate that FUPareto consistently outperforms state-of-the-art FU methods in both unlearning efficacy and retained utility.
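The two core components described above can be illustrated with a minimal numpy sketch. The abstract does not give exact formulations, so the hinge-style form of the MBS loss, the `margin` parameter, the two-task closed-form MGDA weight, and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mbs_loss(logits, target, margin=0.0):
    # Hinge-style sketch of the Minimum Boundary Shift (MBS) idea:
    # penalize the model whenever the target-class logit is NOT below
    # the highest non-target logit (margin is a hypothetical knob).
    z_target = logits[target]
    z_best_other = np.max(np.delete(logits, target))
    return max(0.0, z_target - z_best_other + margin)

def nullspace_mgda_2task(g_forget, g_retain):
    # Sketch of null-space-projected gradient combination for one
    # forgetting gradient and one retained-utility gradient.
    # Step 1: project the forgetting gradient onto the orthogonal
    # complement of the retained gradient, removing the conflicting
    # component (a simple stand-in for the null-space projection).
    coeff = (g_forget @ g_retain) / (g_retain @ g_retain + 1e-12)
    g_forget_perp = g_forget - coeff * g_retain
    # Step 2: closed-form MGDA weight for two tasks, i.e. the alpha
    # minimizing ||alpha*g1 + (1-alpha)*g2||^2, clipped to [0, 1].
    diff = g_forget_perp - g_retain
    alpha = np.clip(
        ((g_retain - g_forget_perp) @ g_retain) / (diff @ diff + 1e-12),
        0.0, 1.0,
    )
    return alpha * g_forget_perp + (1.0 - alpha) * g_retain
```

After the projection step, the forgetting direction is orthogonal to the retained-utility gradient, so descending along the combined direction no longer directly opposes retained performance; the MGDA weight then balances the two objectives.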