We investigate gradient-guided conditional diffusion models for reconstructing private images, focusing on the adversarial interplay between differential-privacy noise and the denoising capability of diffusion models. Whereas existing gradient-based reconstruction attacks struggle with high-resolution images due to their computational cost and reliance on prior knowledge, we propose two novel methods that require only minimal modifications to the diffusion model's generation process and no prior knowledge. Our approach exploits the strong generative capability of diffusion models to reconstruct private images starting from randomly sampled noise, even when a small amount of differentially private noise has been added to the gradients. We also provide a comprehensive theoretical analysis of how differential-privacy noise affects the quality of reconstructed images, characterizing the relationship among the noise magnitude, the architecture of the attacked model, and the attacker's reconstruction capability. Extensive experiments validate both the effectiveness of the proposed methods and the accuracy of our theoretical findings, suggesting new directions for privacy-risk auditing with conditional diffusion models.
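The core idea of gradient-guided generation can be illustrated with a minimal sketch. Everything below is a hypothetical toy setup, not the paper's actual method: the attacked model is a one-layer linear regressor, the "private image" is a 16-dimensional vector, and a plain annealed noise schedule stands in for a pretrained diffusion denoiser. At each step, the sample is nudged by the gradient of a gradient-matching loss against the observed (DP-noised) gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
x_true = rng.uniform(0.0, 1.0, d)    # private example (flattened toy "image")
w = rng.normal(size=d)               # attacked linear model's weights (known to attacker)
y = 1.0                              # label (known to attacker)

def leaked_grad(x):
    """Per-example gradient of the loss 0.5*(w.x - y)^2 w.r.t. w."""
    return (w @ x - y) * x

sigma_dp = 0.01                      # small differential-privacy noise on the gradient
g_obs = leaked_grad(x_true) + rng.normal(scale=sigma_dp, size=d)

def matching_loss(x):
    """Squared distance between the candidate's gradient and the observed one."""
    r = leaked_grad(x) - g_obs
    return float(r @ r)

def guidance_grad(x):
    """Analytic d/dx of matching_loss for this linear toy model."""
    r = leaked_grad(x) - g_obs
    return 2.0 * ((w @ x - y) * r + (r @ x) * w)

# Guided annealed sampling: start from pure noise; each step combines a
# normalized gradient-matching guidance step with a small, decaying injected
# noise (a crude stand-in for the diffusion schedule; the real attack would
# apply a pretrained denoiser here).
x = rng.normal(size=d)
init_loss = matching_loss(x)
T = 2000
for t in range(T):
    anneal = 1.0 - t / T
    g = guidance_grad(x)
    x = x - 0.01 * g / (1.0 + np.linalg.norm(g))
    x = x + 0.001 * anneal * rng.normal(size=d)
    x = np.clip(x, 0.0, 1.0)         # keep the sample in valid pixel range

print(matching_loss(x) < init_loss)  # guidance reduces the gradient mismatch
```

The sketch mirrors the abstract's threat model: reconstruction begins from random noise and succeeds despite a small amount of DP noise on the gradients, because the matching loss only needs to be driven below the noise floor rather than to zero.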