The Newton method has been widely adopted to achieve certified unlearning. A critical assumption in existing approaches is that the data requested for unlearning are selected i.i.d. (independent and identically distributed). However, the problem of certified unlearning under non-i.i.d. deletions remains largely unexplored. In practice, unlearning requests are inherently biased, leading to non-i.i.d. deletions and causing distribution shifts between the original and retained datasets. In this paper, we show that certified unlearning with the Newton method becomes inefficient and ineffective under non-i.i.d. unlearning sets. We then propose a distribution-aware certified unlearning framework based on iterative Newton updates constrained by a trust region. Our method provides a closer approximation to the retrained model and yields a tighter pre-run bound on the gradient residual, thereby ensuring efficient (ε, δ)-certified unlearning. To demonstrate its practical effectiveness under distribution shift, we conduct extensive experiments across multiple evaluation metrics, providing a comprehensive assessment of our approach.
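To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's exact algorithm) of a single trust-region-constrained Newton unlearning step for L2-regularized logistic regression: the model is pulled toward the retrained optimum on the retained data, but the step length is clipped so each update stays within a trust region. All function names, the choice of loss, and the clipping rule are our own assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_hess(theta, X, y, lam):
    """Gradient and Hessian of the L2-regularized logistic loss on (X, y)."""
    p = sigmoid(X @ theta)
    g = X.T @ (p - y) + lam * theta
    W = p * (1 - p)                       # per-sample Hessian weights
    H = X.T @ (X * W[:, None]) + lam * np.eye(X.shape[1])
    return g, H

def newton_unlearn_step(theta, X_retain, y_retain, lam, radius):
    """One Newton step toward the optimum on the retained data,
    with the update clipped to a trust region of the given radius
    (hypothetical clipping rule, chosen for simplicity)."""
    g, H = grad_hess(theta, X_retain, y_retain, lam)
    step = np.linalg.solve(H, g)
    norm = np.linalg.norm(step)
    if norm > radius:                     # trust-region constraint
        step *= radius / norm
    return theta - step
```

Iterating `newton_unlearn_step` on the retained dataset drives the gradient residual on that dataset toward zero; the trust region keeps each update controlled even when the deleted set shifts the loss landscape. A certified scheme would additionally track this residual against a pre-run bound before adding calibrated noise.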