Diffusion Models (DMs) benefit from large and diverse datasets for their training. Since this data is often scraped from the Internet without permission from the data owners, this raises concerns about copyright and intellectual property protections. While (illicit) use of data is easily detected for training samples perfectly re-created by a DM at inference time, it is much harder for data owners to verify if their data was used for training when the outputs from the suspect DM are not close replicas. Conceptually, membership inference attacks (MIAs), which detect if a given data point was used during training, present themselves as a suitable tool to address this challenge. However, we demonstrate that existing MIAs are not strong enough to reliably determine the membership of individual images in large, state-of-the-art DMs. To overcome this limitation, we propose CDI, a framework for data owners to identify whether their dataset was used to train a given DM. CDI relies on dataset inference techniques: instead of using the membership signal from a single data point, CDI leverages the fact that most data owners, such as providers of stock photography, visual media companies, or even individual artists, own datasets with multiple publicly exposed data points which might all be included in the training of a given DM. By selectively aggregating signals from existing MIAs, extracting features for these datasets with new handcrafted methods, feeding them to a scoring model, and applying rigorous statistical testing, CDI allows data owners with as few as 70 data points to identify with a confidence of more than 99% whether their data was used to train a given DM. Thus, CDI represents a valuable tool for data owners to claim illegitimate use of their copyrighted data.
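The core statistical step described above, aggregating per-sample membership signals over a whole dataset and testing them against a reference of known non-members, can be sketched as follows. This is a minimal illustration of the dataset-inference idea, not the CDI implementation: the MIA scores are simulated, and the hypothetical `dataset_inference` helper uses a simple one-sided permutation test on the mean score difference in place of CDI's scoring model and feature extractors.

```python
import random
import statistics

def dataset_inference(suspect, reference, n_perm=2000, alpha=0.01, seed=0):
    """One-sided permutation test: is the mean MIA score of the suspect
    dataset significantly higher than that of known non-member data?
    Returns (used_in_training, p_value). Illustrative helper, not CDI's API."""
    rng = random.Random(seed)
    observed = statistics.mean(suspect) - statistics.mean(reference)
    pooled = list(suspect) + list(reference)
    k = len(suspect)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # Difference in means under the null hypothesis of no membership signal.
        diff = statistics.mean(pooled[:k]) - statistics.mean(pooled[k:])
        if diff >= observed:
            hits += 1
    p = (hits + 1) / (n_perm + 1)  # add-one smoothing for a valid p-value
    return p < alpha, p

# Simulated per-sample MIA scores: members tend to score slightly higher,
# but any single point is too noisy to classify reliably on its own.
rng = random.Random(1)
members = [rng.gauss(0.6, 0.1) for _ in range(70)]      # 70 points, as in the abstract
non_members = [rng.gauss(0.5, 0.1) for _ in range(70)]

used, p = dataset_inference(members, non_members)
print(used, p)
```

Even though individual scores overlap heavily between the two groups, aggregating 70 of them yields a highly significant result, which is the intuition behind moving from per-sample membership inference to dataset inference.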