In the field of machine unlearning, certified unlearning has been extensively studied for convex machine learning models due to its high efficiency and strong theoretical guarantees. However, its application to deep neural networks (DNNs), known for their highly nonconvex nature, still poses challenges. To bridge the gap between certified unlearning and DNNs, we propose several simple techniques that extend certified unlearning methods to nonconvex objectives. To reduce the time complexity, we develop an efficient computation method via inverse Hessian approximation without compromising the certification guarantees. In addition, we extend our discussion of certification to nonconvergent training and sequential unlearning, considering that real-world users can send unlearning requests at different time points. Extensive experiments on three real-world datasets demonstrate the efficacy of our method and the advantages of certified unlearning in DNNs.