Inference-time alignment for diffusion models aims to adapt a pre-trained diffusion model toward a target distribution without retraining the base score network, thereby preserving the generative capacity of the base model while enforcing desired properties at inference time. A central mechanism for achieving such alignment is guidance, which modifies the sampling dynamics through an additional drift term. In this work, we introduce Doob's matching, a novel framework for guidance estimation grounded in Doob's $h$-transform. Our approach formulates guidance as the gradient of the logarithm of an underlying Doob's $h$-function and employs gradient-penalized regression to estimate the $h$-function and its gradient simultaneously, yielding a consistent estimator of the guidance. Theoretically, we establish non-asymptotic convergence rates for the estimated guidance. Moreover, we analyze the resulting controllable diffusion processes and prove non-asymptotic convergence guarantees for the generated distributions in the 2-Wasserstein distance.
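To make the mechanism concrete, here is a minimal sketch in illustrative notation (the drift $b$, diffusion coefficient $\sigma$, terminal signal $Y$, penalty weight $\lambda$, and parametrized estimators $h_\theta$, $g_\theta$ are placeholders, not symbols fixed by this abstract). Given base sampling dynamics
$$\mathrm{d}X_t = b(X_t, t)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}W_t,$$
Doob's $h$-transform with a space-time harmonic function $h$ produces the guided dynamics
$$\mathrm{d}X_t = \bigl[b(X_t, t) + \sigma(t)\sigma(t)^{\top}\,\nabla_x \log h(X_t, t)\bigr]\,\mathrm{d}t + \sigma(t)\,\mathrm{d}W_t,$$
so the guidance is precisely the additional drift $\nabla_x \log h$. One hedged form of a gradient-penalized regression objective that estimates the $h$-function and its gradient jointly is
$$\min_{\theta}\; \mathbb{E}\bigl[\bigl(h_\theta(X_t, t) - Y\bigr)^2\bigr] + \lambda\, \mathbb{E}\bigl[\bigl\|\nabla_x h_\theta(X_t, t) - g_\theta(X_t, t)\bigr\|^2\bigr],$$
where $Y$ stands for a terminal signal such as a reward evaluated at $X_T$; the guidance estimate is then recovered as $g_\theta / h_\theta$, i.e., an estimate of $\nabla_x \log h$.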