Certified unlearning based on differential privacy offers strong guarantees but remains largely impractical: the noisy fine-tuning approaches proposed so far achieve these guarantees but severely reduce model accuracy. We propose sequential noise scheduling, which distributes the noise budget across orthogonal subspaces of the parameter space rather than injecting it all at once. This simple modification mitigates the destructive effect of the noise while preserving the original certification guarantees. We extend the analysis of noisy fine-tuning to the subspace setting, proving that the same $(\varepsilon,\delta)$ privacy budget is retained. Empirical results on image classification benchmarks show that our approach substantially improves accuracy after unlearning while remaining robust to membership inference attacks. Together, these findings demonstrate that certified unlearning can achieve both rigorous guarantees and practical utility.
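As a hedged illustration (not the paper's proof), one way to see why splitting isotropic Gaussian noise across orthogonal subspaces need not weaken the Gaussian-mechanism guarantee is the following sketch. Let $P_1,\dots,P_k$ be orthogonal projections onto complementary subspaces of the parameter space with $\sum_i P_i = I$, and let each stage add $P_i \xi_i$ with $\xi_i \sim \mathcal{N}(0,\sigma^2 I)$ drawn independently; the symbols $P_i$, $\xi_i$, and $\sigma$ are introduced here purely for illustration and do not appear in the source.
$$
\theta' \;=\; \theta + \sum_{i=1}^{k} P_i \xi_i,
\qquad
\sum_{i=1}^{k} P_i \xi_i \;\sim\; \mathcal{N}\!\Big(0,\; \sigma^2 \sum_{i=1}^{k} P_i P_i^{\top}\Big) \;=\; \mathcal{N}(0,\; \sigma^2 I),
$$
so under these assumptions the sequentially scheduled perturbation has the same distribution as a single isotropic Gaussian perturbation, and the standard $(\varepsilon,\delta)$ analysis of the Gaussian mechanism carries over unchanged.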