Diffusion models (DMs) are among the most widely used generative models for producing high-quality images. However, a flurry of recent papers has shown that DMs are among the least private forms of image generators, by extracting a significant number of near-identical replicas of training images from them. Existing privacy-enhancing techniques for DMs, unfortunately, do not provide a good privacy-utility trade-off. In this paper, we aim to improve the current state of DMs with differential privacy (DP) by adopting $\textit{Latent}$ Diffusion Models (LDMs). LDMs are equipped with powerful pre-trained autoencoders that map high-dimensional pixels into lower-dimensional latent representations, in which the DMs are trained, yielding faster and more efficient DM training. Rather than fine-tuning entire LDMs, we fine-tune only the $\textit{attention}$ modules of LDMs with DP-SGD, reducing the number of trainable parameters by roughly $90\%$ and achieving a better privacy-utility trade-off. Our approach allows us to generate realistic, high-dimensional images ($256 \times 256$) conditioned on text prompts with DP guarantees, which, to the best of our knowledge, has not been attempted before. Our approach provides a promising direction for training more powerful yet training-efficient differentially private DMs that produce high-quality DP images. Our code is available at https://anonymous.4open.science/r/DP-LDM-4525.
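The core privacy mechanism referenced above, DP-SGD, clips each per-example gradient to a fixed L2 norm and adds calibrated Gaussian noise to the averaged update, so that only noised, bounded gradients ever touch the parameters. The following is a minimal NumPy sketch of one DP-SGD step on a toy least-squares objective; it is an illustration of the general mechanism under assumed hyperparameter names (`clip_norm`, `noise_mult`), not the authors' LDM training code, which would apply the same step to the attention-module parameters only.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for least-squares loss 0.5 * (x.w - y)^2.

    Per-example gradients are clipped to L2 norm <= clip_norm, averaged,
    and perturbed with Gaussian noise of std noise_mult * clip_norm / n.
    (Hypothetical toy example; hyperparameter names are assumptions.)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Per-example gradient of 0.5*(x.w - y)^2 w.r.t. w is (x.w - y) * x
    residuals = X @ w - y                # shape (n,)
    grads = residuals[:, None] * X       # shape (n, d), one gradient per example
    # Clip each per-example gradient to L2 norm at most clip_norm
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Average the clipped gradients, then add Gaussian noise
    noisy_grad = grads.mean(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm / len(X), size=w.shape)
    return w - lr * noisy_grad
```

Freezing roughly 90% of the parameters, as the paper does by restricting updates to the attention modules, shrinks the dimension of the noised gradient vector, which is one reason the privacy-utility trade-off improves.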