Diffusion models have recently emerged as powerful learned priors for Bayesian inverse problems (BIPs). Diffusion-based solvers rely on a presumed likelihood for the observations to guide the generation process. However, prior work leaves the link between the presumed likelihood and recovery quality unclear. We bridge this gap by characterizing the posterior approximation error and proving the \emph{stability} of diffusion-based solvers. An immediate consequence of our stability analysis is a previously unexplored \emph{lack of robustness} in diffusion-based solvers: performance can degrade when the presumed likelihood mismatches the unknown true data-generating process. To address this issue, we propose a simple yet effective solution, \emph{robust diffusion posterior sampling}, which is provably \emph{robust} and compatible with existing gradient-based posterior samplers. Empirical results on scientific inverse problems and natural image tasks validate the effectiveness and robustness of our method, showing consistent performance improvements under challenging likelihood misspecification.