The growing number of legal disputes over the unauthorized use of data in machine learning (ML) systems highlights the urgent need for reliable data-use auditing mechanisms to ensure accountability and transparency in ML. In this paper, we present the first proactive, instance-level data-use auditing method, designed to enable data owners to audit the use of their individual data instances in ML models and thereby obtain fine-grained auditing results. Our approach integrates any black-box membership inference technique with a sequential hypothesis test, providing a quantifiable and tunable false-detection rate. We evaluate our method on three types of visual ML models: image classifiers, visual encoders, and Contrastive Language-Image Pre-training (CLIP) models. In addition, we apply our method to evaluate the performance of two state-of-the-art approximate unlearning methods. Our findings reveal that neither method successfully removes the influence of the unlearned data instances from image classifiers and CLIP models, even when sacrificing $10.33\%$ of model utility.
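To make the auditing idea concrete, the following is a minimal Python sketch of one way to combine a black-box membership inference (MI) signal with a sequential hypothesis test that bounds the false-detection rate. It assumes a specific construction that is not spelled out in the abstract: each audit trial yields a Boolean outcome that is Bernoulli(0.5) under the null hypothesis (the data was not used), and the test is a sequential probability ratio test stopped by Ville's inequality. The names `sequential_audit`, `p1`, `mi_prefers_published`, and `audit_pairs` are illustrative, not the paper's API.

```python
import math
from typing import Iterable, Tuple

def sequential_audit(trials: Iterable[bool], alpha: float = 0.01,
                     p1: float = 0.75) -> Tuple[bool, int]:
    """Sequential probability ratio test for data-use detection.

    Each trial is the Boolean outcome of one black-box MI query
    (hypothetically: did the MI technique prefer the published data
    version over an unpublished twin?). Under H0 (data NOT used),
    outcomes are assumed Bernoulli(0.5); under H1, Bernoulli(p1) with
    p1 > 0.5. Stopping when the likelihood ratio reaches 1/alpha
    bounds the false-detection rate at alpha, because the likelihood
    ratio is a nonnegative martingale under H0 (Ville's inequality).
    """
    log_lr = 0.0
    threshold = math.log(1.0 / alpha)
    t = 0
    for t, hit in enumerate(trials, start=1):
        # Multiply in (in log space) the likelihood ratio of this outcome.
        log_lr += math.log(p1 / 0.5) if hit else math.log((1.0 - p1) / 0.5)
        if log_lr >= threshold:
            return True, t   # data use detected after t trials
    return False, t          # evidence insufficient at level alpha

# Hypothetical usage: `mi_prefers_published` wraps any black-box MI
# technique; `audit_pairs` holds (published, held-out) instance pairs.
# detected, n_queries = sequential_audit(
#     mi_prefers_published(model, pair) for pair in audit_pairs
# )
```

A sequential test of this form stops as soon as the accumulated evidence is decisive, so the number of MI queries adapts to the strength of the signal while the false-detection guarantee holds at every stopping time.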