In Federated Learning (FL), multiple clients collaboratively train a model without sharing raw data. This paradigm can be further enhanced with Differential Privacy (DP) to protect local data from inference attacks, yielding what is termed DPFL. An emerging privacy requirement, ``the right to be forgotten'' for clients, poses new challenges to DPFL but remains largely unexplored. Although federated unlearning (FU) has been studied extensively, existing schemes are inapplicable to DPFL because the noise introduced by the DP mechanism compromises their effectiveness and efficiency. In this paper, we propose Federated Unlearning with Indistinguishability (FUI), the first scheme to unlearn the local data of a target client in DPFL. FUI consists of two main steps: local model retraction and global noise calibration, which together produce an unlearned model that is statistically indistinguishable from a retrained model. Specifically, we demonstrate that the noise added in DPFL already endows the model with a certain level of indistinguishability after local model retraction, and we then strengthen the degree of unlearning through global noise calibration. Additionally, for an efficient and consistent implementation of FUI, we formulate a two-stage Stackelberg game to derive optimal unlearning strategies for both the server and the target client. Privacy and convergence analyses establish theoretical guarantees, and experiments on four real-world datasets show that FUI achieves better model performance and higher efficiency than mainstream FU schemes. Simulation results further verify the optimality of the derived unlearning strategies.
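The abstract does not give implementation details of FUI itself, but the DP mechanism it builds on is standard: each client clips its model update and adds calibrated Gaussian noise before sharing it. The sketch below is an illustrative Gaussian-mechanism example under assumed parameter names (`clip_norm`, `noise_multiplier`), not the paper's algorithm:

```python
import numpy as np

def dp_sanitize_update(update, clip_norm, noise_multiplier, rng):
    """Clip a client's model update to L2 norm `clip_norm` and add
    Gaussian noise scaled by `noise_multiplier * clip_norm`.
    Illustrative Gaussian-mechanism sketch, not the FUI algorithm."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
update = np.array([3.0, 4.0])  # L2 norm 5.0, will be clipped to 1.0
sanitized = dp_sanitize_update(update, clip_norm=1.0,
                               noise_multiplier=0.5, rng=rng)
```

It is exactly this injected noise that, per the abstract, both hinders existing FU schemes and supplies the baseline indistinguishability that FUI's global noise calibration then reinforces.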