Machine unlearning algorithms, designed to selectively remove training data from models, have emerged as a promising response to growing privacy concerns. In this work, we expose a critical yet underexplored vulnerability in the deployment of unlearning systems: the assumption that the data requested for removal is always part of the original training set. We present a threat model in which an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set. We propose white-box and black-box attack algorithms and evaluate them through a case study on image classification tasks using the CIFAR-10 and ImageNet datasets, targeting a family of widely used unlearning methods. Our results show extremely poor test accuracy following the attack: 3.6% on CIFAR-10 and 0.4% on ImageNet for white-box attacks, and 8.5% on CIFAR-10 and 1.3% on ImageNet for black-box attacks. Additionally, we evaluate various mechanisms for verifying the legitimacy of unlearning requests and reveal the challenges of verification: most mechanisms fail to detect stealthy attacks without severely impairing their ability to process valid requests. These findings underscore the urgent need for research on more robust request verification methods and unlearning protocols, should the deployment of machine unlearning systems become more prevalent.