Artists are increasingly concerned about advancements in image generation models that can closely replicate their unique artistic styles. In response, several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online. In this work, we evaluate the effectiveness of popular protections -- with millions of downloads -- and show they only provide a false sense of security. We find that low-effort and "off-the-shelf" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections. Through a user study, we demonstrate that all existing protections can be easily bypassed, leaving artists vulnerable to style mimicry. We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI, and urge the development of alternative non-technological solutions.
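To make concrete what an "off-the-shelf" upscaling attack might look like, below is a minimal sketch of perturbation purification by rescaling. Everything here is illustrative and not the paper's actual pipeline: the function name purify_by_rescaling and the file names are hypothetical, and Pillow's bicubic resampling stands in for the learned super-resolution models an attacker would more plausibly use.

```python
# Minimal sketch: remove adversarial perturbations by downscaling then
# upscaling an image, discarding the high-frequency detail such
# perturbations typically occupy.
# Assumptions (not from the paper): Pillow bicubic resampling as a
# stand-in for a learned upscaler; "protected.png" is a hypothetical
# input file containing a perturbation-protected artwork.
from PIL import Image


def purify_by_rescaling(path: str, factor: int = 2) -> Image.Image:
    """Downscale an image by `factor`, then upscale back to its
    original size, acting as a crude low-pass filter."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.BICUBIC)
    return small.resize((w, h), Image.BICUBIC)


if __name__ == "__main__":
    cleaned = purify_by_rescaling("protected.png")
    cleaned.save("purified.png")
```

The point of the sketch is that the attacker needs no knowledge of the protection tool: any operation that discards fine-grained pixel structure while preserving the perceptible style of the artwork can weaken a perturbation-based defense.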