Copyright infringement may occur when a generative model produces samples substantially similar to some copyrighted data that it had access to during the training phase. The notion of access usually refers to including copyrighted samples directly in the training dataset, which one may inspect to identify an infringement. We argue that such visual auditing largely overlooks a concealed copyright infringement, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training Latent Diffusion Models on it. Such disguises require only indirect access to the copyrighted material and cannot be visually distinguished from benign images, thus easily circumventing the current auditing tools. In this paper, we provide a better understanding of such disguised copyright infringement by uncovering the disguise generation algorithm, showing how the disguises can be revealed, and, importantly, how to detect them to augment the existing toolbox. Additionally, we introduce a broader notion of acknowledgment for comprehending such indirect access.
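To make the idea concrete, below is a minimal sketch of how such a disguise could be constructed, assuming a Latent Diffusion Model whose frozen VAE encoder maps images to the latents it is trained on: the disguise is optimized so that its latent matches that of the copyrighted target while its pixels stay close to an innocuous base image. The names here (`generate_disguise`, `encoder`, the weight `lam`) are hypothetical, and this is an illustrative sketch under those assumptions rather than the exact algorithm analyzed in the paper.

```python
import torch

def generate_disguise(encoder, target, base, steps=500, lr=1e-2, lam=0.1):
    # Hypothetical latent-matching sketch: `encoder` is the frozen VAE
    # encoder of a Latent Diffusion Model; `target` is the copyrighted
    # image tensor and `base` an innocuous image of the same shape.
    disguise = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([disguise], lr=lr)
    with torch.no_grad():
        z_target = encoder(target)  # the latent the model would train on
    for _ in range(steps):
        opt.zero_grad()
        # Match the target in latent space, so that training on the
        # disguise has the effect of training on the copyrighted sample...
        latent_loss = ((encoder(disguise) - z_target) ** 2).mean()
        # ...while staying visually close to the innocuous base image.
        visual_loss = ((disguise - base) ** 2).mean()
        (latent_loss + lam * visual_loss).backward()
        opt.step()
        disguise.data.clamp_(0.0, 1.0)  # keep a valid image
    return disguise.detach()
```

The same latent-space view suggests why visual auditing fails and how detection can proceed: an encode-decode round trip (decoding the disguise's latent back to pixel space) reveals the hidden target, since a successful disguise carries the target's latent even though its pixels do not.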