Membership inference attacks (MIAs) on diffusion models have emerged as potential evidence of unauthorized data usage in the training of pre-trained diffusion models. These attacks aim to detect the presence of specific images in the training datasets of diffusion models. Our study delves into the evaluation of state-of-the-art MIAs on diffusion models and reveals critical flaws and overly optimistic performance estimates in existing MIA evaluations. We introduce CopyMark, a more realistic MIA benchmark that distinguishes itself through its support for pre-trained diffusion models, unbiased datasets, and fair evaluation pipelines. Through extensive experiments, we demonstrate that the effectiveness of current MIA methods degrades significantly under these more practical conditions. Based on our results, we caution that MIA, in its current state, is not a reliable approach for identifying unauthorized data usage in pre-trained diffusion models. To the best of our knowledge, we are the first to uncover this performance overestimation of MIAs on diffusion models and to present a unified benchmark for more realistic evaluation. Our code is available on GitHub: \url{https://github.com/caradryanl/CopyMark}.