Generative modelling based on denoising diffusion processes has emerged as a leading paradigm for conditional sampling in inverse problems. In many real-world applications, large, expensively trained unconditional diffusion models are available, and we aim to exploit them to improve conditional sampling. Most recent approaches are motivated heuristically and lack a unifying framework, obscuring the connections between them. Moreover, they often suffer from issues such as high sensitivity to hyperparameters, expensive training, or the need to access weights hidden behind a closed API. In this work, we unify conditional training and sampling using the mathematically well-understood Doob's $h$-transform. This new perspective allows us to bring many existing methods under a common umbrella. Within this framework, we propose DEFT (Doob's $h$-transform Efficient FineTuning), a new approach to conditional generation that fine-tunes a very small network to quickly learn the conditional $h$-transform while keeping the larger unconditional network unchanged. DEFT is much faster than existing baselines while achieving state-of-the-art performance across a variety of linear and non-linear benchmarks. On image reconstruction tasks, we achieve speedups of up to 1.6$\times$, while attaining the best perceptual quality on natural images and the best reconstruction performance on medical images. We also provide initial experiments on protein motif scaffolding, where DEFT outperforms reconstruction-guidance methods.
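For context, the role of the $h$-transform can be sketched with the standard score decomposition for conditioned diffusions; the notation below is ours and is a minimal illustration, not the paper's exact formulation:

```latex
% Sketch (our notation): conditioning an unconditional diffusion via Doob's h-transform.
% The conditional score splits into the unconditional score plus the gradient of the
% log h-function, where h_t(x_t) = p(y | x_t) for an observation y:
\begin{align}
  \nabla_{x_t} \log p_t(x_t \mid y)
    &= \nabla_{x_t} \log p_t(x_t) \;+\; \nabla_{x_t} \log h_t(x_t), \\
  h_t(x_t) &= p(y \mid x_t).
\end{align}
% The first term is supplied by the large, frozen unconditional network; a small
% fine-tuned network only needs to approximate the second (h-transform) term.
```

This decomposition is what lets the fine-tuned network stay small: it learns only the conditional correction, not the full score.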