Driven by the right to be forgotten (RTBF), machine unlearning has become an essential requirement for privacy-preserving machine learning. However, its realization in decentralized federated learning (DFL) remains largely unexplored. In DFL, clients exchange local updates only with neighbors, causing model information to propagate and mix across the network. As a result, when a client requests data deletion, its influence is implicitly embedded throughout the system, making removal difficult without centralized coordination. We propose a novel certified unlearning framework for DFL based on Newton-style updates. Our approach first quantifies how a client's data influence propagates during training. Leveraging curvature information of the loss with respect to the target data, we then construct corrective updates using Newton-style approximations. To ensure scalability, we approximate second-order information via Fisher information matrices. The resulting updates are perturbed with calibrated noise and broadcast through the network to eliminate residual influence across clients. We theoretically prove that our approach satisfies the formal definition of certified unlearning, ensuring that the unlearned model is difficult to distinguish from a retrained model without the deleted data. We also establish utility bounds showing that the unlearned model remains close to retraining from scratch. Extensive experiments across diverse decentralized settings demonstrate the effectiveness and efficiency of our framework.
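To make the update concrete, here is a minimal sketch of a single Newton-style unlearning step of the kind the abstract describes: a corrective step against the forget-set gradient, with the Hessian approximated by a (damped) empirical Fisher matrix and the result perturbed by calibrated Gaussian noise. This is an illustrative assumption, not the paper's actual algorithm; all names (`newton_style_unlearn`, `sigma`, `damping`) and the diagonal-free full-Fisher form are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def newton_style_unlearn(theta, grads_forget, grads_retain,
                         sigma=0.01, damping=1e-3):
    """Hypothetical single corrective update removing forget-set influence.

    theta:        current parameters, shape (d,)
    grads_forget: per-example gradients on the data to delete, shape (m, d)
    grads_retain: per-example gradients on retained data, shape (n, d)
    """
    # Gradient of the loss restricted to the forget set.
    g_forget = grads_forget.mean(axis=0)

    # Empirical Fisher approximation to the Hessian on retained data:
    # F ~ E[g g^T]; damping keeps the system well-conditioned.
    F = grads_retain.T @ grads_retain / len(grads_retain)
    F += damping * np.eye(theta.size)

    # Newton-style correction: step against the forget set's influence.
    delta = np.linalg.solve(F, g_forget)

    # Calibrated Gaussian noise masks residual influence, as in
    # certified-unlearning analyses.
    noise = rng.normal(0.0, sigma, size=theta.shape)
    return theta + delta + noise
```

In the decentralized setting the abstract describes, each client would apply such a perturbed correction locally and broadcast it to its neighbors so the residual influence mixed into the rest of the network is also removed.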