Ambient diffusion is a recently proposed framework for training diffusion models using corrupted data. Both Ambient Diffusion and alternative SURE-based approaches for learning diffusion models from corrupted data resort to approximations that degrade performance. We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data, solving an open problem in this space. Our key technical contribution is a method that uses a double application of Tweedie's formula and a consistency loss function, which together enable sampling at noise levels below the noise level of the observed data. We also provide further evidence that diffusion models memorize their training sets by identifying extremely corrupted images that are almost perfectly reconstructed, raising copyright and privacy concerns. Our method for training on corrupted samples can be used to mitigate this problem. We demonstrate this by fine-tuning Stable Diffusion XL to generate samples from a target distribution given only noisy training samples. Our framework reduces memorization of the fine-tuning dataset while maintaining competitive performance.
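The double Tweedie step mentioned above can be sketched as follows. This is an illustrative derivation under standard Gaussian-noising assumptions (data observed at noise level $\sigma_{t_n}$, diffused further to level $\sigma_t > \sigma_{t_n}$), not a statement of the paper's exact formulation:

```latex
% Tweedie's formula applied twice. Let x_t = x_0 + \sigma_t \epsilon.
% First, the standard identity for the clean posterior mean:
%   E[x_0 | x_t] = x_t + \sigma_t^2 \nabla_{x_t} \log p_t(x_t).
% Second, since x_t = x_{t_n} + \sqrt{\sigma_t^2 - \sigma_{t_n}^2}\,\epsilon',
% Tweedie also gives the posterior mean of the *observed-noise-level* sample:
%   E[x_{t_n} | x_t] = x_t + (\sigma_t^2 - \sigma_{t_n}^2) \nabla_{x_t} \log p_t(x_t).
% Solving the second identity for the score and substituting into the first:
\mathbb{E}[x_0 \mid x_t]
  = x_t + \frac{\sigma_t^2}{\sigma_t^2 - \sigma_{t_n}^2}
    \bigl(\mathbb{E}[x_{t_n} \mid x_t] - x_t\bigr).
```

The point of this rearrangement is that a network trained only on samples at noise level $\sigma_{t_n}$ can learn $\mathbb{E}[x_{t_n} \mid x_t]$ for $\sigma_t > \sigma_{t_n}$, and the clean posterior mean then follows without ever observing clean data; the consistency loss handles noise levels below $\sigma_{t_n}$, where this identity no longer applies.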