Diffusion models have demonstrated significant potential in image generation. However, their ability to replicate training data poses a privacy risk, particularly when the training data includes confidential information. Existing mitigation strategies primarily focus on augmenting the training dataset, leaving the impact of diffusion model architecture underexplored. In this paper, we address this gap by examining and mitigating the influence of the model structure, specifically the skip connections in the diffusion model's U-Net. We first identify a trade-off in the skip connections: while they enhance image generation quality, they also reinforce the memorization of training data, increasing the risk of replication. To address this, we propose a replication-aware U-Net (RAU-Net) architecture that incorporates information transfer blocks into the skip connections that are less essential for image quality. Recognizing that RAU-Net may itself affect generation quality, we further investigate and identify the specific timesteps during which the impact on memorization is most pronounced. By applying RAU-Net selectively at these critical timesteps, we couple our novel diffusion model with a targeted training and inference strategy, forming a framework we refer to as LoyalDiffusion. Extensive experiments demonstrate that LoyalDiffusion outperforms the state-of-the-art replication mitigation method, achieving a 48.63% reduction in replication while maintaining comparable image quality.
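To make the core idea concrete, the following is a minimal NumPy sketch of the mechanism the abstract describes: a skip connection that passes features through unchanged at most timesteps, but routes them through a restricted "information transfer block" at memorization-critical timesteps. The function names, the pool-then-upsample instantiation of the block, and the `critical_timesteps` set are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_transfer_block(skip_feat: np.ndarray, bottleneck: int = 4) -> np.ndarray:
    """Hypothetical information transfer block: average-pool each spatial
    block and upsample back, so the skip path carries coarse structure
    (useful for quality) but less of the fine detail that can encode
    memorized training images. Expects a (channels, H, W) array with
    H and W divisible by `bottleneck`."""
    c, h, w = skip_feat.shape
    pooled = skip_feat.reshape(
        c, h // bottleneck, bottleneck, w // bottleneck, bottleneck
    ).mean(axis=(2, 4))
    # Nearest-neighbor upsample back to the original spatial size.
    return np.repeat(np.repeat(pooled, bottleneck, axis=1), bottleneck, axis=2)

def skip_connection(skip_feat: np.ndarray, t: int,
                    critical_timesteps: set) -> np.ndarray:
    """Apply the restricted block only at memorization-critical timesteps;
    otherwise keep the standard identity skip connection."""
    if t in critical_timesteps:
        return info_transfer_block(skip_feat)
    return skip_feat
```

This captures the trade-off the abstract points to: at non-critical timesteps the skip connection is untouched and image quality is preserved, while at critical timesteps the bottleneck limits how much training-image detail can flow directly from encoder to decoder.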