Copyright infringement may occur when a generative model produces samples substantially similar to copyrighted data that it had access to during the training phase. The notion of access usually refers to including copyrighted samples directly in the training dataset, which one may inspect to identify an infringement. We argue that such visual auditing largely overlooks a concealed form of copyright infringement, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training Latent Diffusion Models on it. Such disguises require only indirect access to the copyrighted material, cannot be distinguished visually, and thus easily circumvent current auditing tools. In this paper, we provide a better understanding of such disguised copyright infringement by uncovering the disguise generation algorithm, the revelation of the disguises, and, importantly, how to detect them to augment the existing toolbox. Additionally, we introduce a broader notion of acknowledgment for comprehending such indirect access.
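To make the threat concrete, below is a minimal, hypothetical sketch of how such a disguise could be generated and detected, assuming the attacker can query the victim LDM's latent encoder (we use the `AutoencoderKL` VAE from `diffusers` as a stand-in; the cover/target images, loss weights, and thresholds are illustrative assumptions, not the paper's exact procedure). The disguise is optimized to stay visually close to an innocuous cover image while its latent matches that of the copyrighted target; the detector flags training samples whose latent sits near a copyrighted latent despite a large pixel-space distance.

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for the victim LDM's latent encoder; frozen since only the
# disguise image is optimized.
encoder = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

def encode(img: torch.Tensor) -> torch.Tensor:
    """Map an image in [-1, 1], shape (1, 3, H, W), to its VAE latent mean."""
    return encoder.encode(img).latent_dist.mean

def make_disguise(cover, target, steps=500, lr=0.01, pixel_weight=1.0):
    """Hypothetical disguise: looks like `cover`, encodes like `target`."""
    with torch.no_grad():
        target_latent = encode(target)
    x = cover.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        latent_loss = F.mse_loss(encode(x), target_latent)   # transfer training signal
        pixel_loss = F.mse_loss(x, cover)                    # stay visually innocuous
        (latent_loss + pixel_weight * pixel_loss).backward()
        opt.step()
        x.data.clamp_(-1.0, 1.0)
    return x.detach()

def is_suspicious(sample, copyrighted, latent_thresh=0.05, pixel_thresh=0.5):
    """Flag a sample whose latent is near a copyrighted latent despite a
    large pixel-space distance -- the signature of a disguise. Thresholds
    are illustrative and would be calibrated in practice."""
    with torch.no_grad():
        d_lat = F.mse_loss(encode(sample), encode(copyrighted)).item()
        d_pix = F.mse_loss(sample, copyrighted).item()
    return d_lat < latent_thresh and d_pix > pixel_thresh
```

The design intuition: because an LDM trains on encoder latents rather than raw pixels, matching the disguise's latent to the target's latent can suffice to transfer the copyrighted content's training effect, which is exactly why pixel-level (visual) auditing misses the disguise while latent-space screening can catch it.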