Diffusion models are becoming the de facto generative models, capable of producing exceptionally high-resolution image data. Training effective diffusion models requires massive amounts of real data, which are often privately owned by distributed parties. These data parties can collaboratively train diffusion models in a federated learning manner by sharing gradients instead of raw data. In this paper, we study the privacy leakage risk that gradient inversion attacks pose to such gradient sharing. First, we design a two-phase fusion optimization, GIDM, which leverages the well-trained generative model itself as prior knowledge to constrain the inversion search (latent) space, followed by pixel-wise fine-tuning. GIDM is shown to reconstruct images almost identical to the originals. Considering a more privacy-preserving training scenario, we then argue that locally initialized private training noise $\epsilon$ and sampling step $t$ may raise additional challenges for the inversion attack. To address this, we propose a triple optimization, GIDM+, which coordinates the optimization of the unknown data, $\epsilon$, and $t$. Our extensive evaluation demonstrates the vulnerability of gradient sharing as a data-protection mechanism for diffusion models: even high-resolution images can be reconstructed with high quality.
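For context, gradient inversion in this setting can be sketched as gradient matching against the standard DDPM training loss; the hatted variables, the noise schedule $\bar{\alpha}_t$, and the distance $\mathcal{D}$ below are illustrative notation, not the paper's exact formulation:
$$
\min_{\hat{x},\,\hat{\epsilon},\,\hat{t}}\;
\mathcal{D}\Bigl(
\nabla_{\theta}\,
\bigl\|\hat{\epsilon}-\epsilon_{\theta}\bigl(\sqrt{\bar{\alpha}_{\hat{t}}}\,\hat{x}+\sqrt{1-\bar{\alpha}_{\hat{t}}}\,\hat{\epsilon},\;\hat{t}\bigr)\bigr\|_2^2,
\;g
\Bigr),
$$
where $g$ is the victim's shared gradient and $\mathcal{D}$ is a gradient distance such as cosine or $\ell_2$. Under this reading, GIDM constrains $\hat{x}$ to the generator's latent space before pixel-wise fine-tuning, while GIDM+ treats $\hat{\epsilon}$ and $\hat{t}$ as additional unknowns optimized jointly with $\hat{x}$.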