In the age of powerful diffusion models such as DALL-E and Stable Diffusion, many digital artists have suffered style mimicry attacks, in which these models are fine-tuned on their works. The ability to mimic an artist's style via text-to-image diffusion models raises serious ethical issues, especially when done without explicit consent. Glaze, a tool that applies perturbations of varying strength to digital art, has shown significant success in preventing style mimicry attacks, at the cost of artifacts ranging from imperceptible noise to severe quality degradation. The release of Glaze has sparked further discussion of the effectiveness of such protection methods. In this paper, we propose GLEAN: applying image-to-image (I2I) generative networks to strip perturbations from Glazed images, and we evaluate the performance of style mimicry attacks on Glaze's output before and after applying GLEAN. GLEAN aims to support and enhance Glaze by highlighting its limitations and encouraging further development.