Score-based diffusion models have recently been extended to infinite-dimensional function spaces, with applications such as inverse problems arising from partial differential equations. In the Bayesian formulation of an inverse problem, the aim is to sample from a posterior distribution over functions, obtained by conditioning a prior on noisy observations. While diffusion models provide expressive priors in function space, the theory of conditioning them so that they sample from the posterior has remained open. We address this problem under the assumption that the prior either lies in the Cameron--Martin space or is absolutely continuous with respect to a Gaussian measure. We prove that such models can be conditioned via an infinite-dimensional extension of Doob's $h$-transform, and that the conditional score decomposes into the sum of the unconditional score and a guidance term. Because the guidance term is intractable, we propose a simulation-free score matching objective, called Supervised Guidance Training, that enables efficient and stable posterior sampling. We illustrate the theory with numerical examples on Bayesian inverse problems in function spaces. In summary, our work offers the first function-space method for fine-tuning trained diffusion models so that they sample accurately from a posterior.
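For intuition, the claimed decomposition mirrors the classical finite-dimensional identity that follows from Bayes' rule; the sketch below uses standard guidance notation ($p_t$ for the marginal at diffusion time $t$, $y$ for the observation) and is only the finite-dimensional analogue, since the infinite-dimensional statement in the paper requires the Cameron--Martin structure to make sense of the score.

```latex
% Finite-dimensional analogue of the conditional score decomposition
% (notation p_t, x_t, y is illustrative, not taken from the paper):
\[
  \nabla_{x} \log p_t(x \mid y)
  \;=\;
  \underbrace{\nabla_{x} \log p_t(x)}_{\text{unconditional score}}
  \;+\;
  \underbrace{\nabla_{x} \log p_t(y \mid x)}_{\text{guidance term}},
\]
% The guidance term involves the intractable likelihood of y given the
% noised state x at time t, which motivates learning it via a
% score-matching objective rather than computing it in closed form.
```

The first term is supplied by the pretrained unconditional model, so only the guidance term must be learned, which is what makes fine-tuning a trained diffusion model toward the posterior feasible.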