Denoising diffusion models have emerged as the go-to generative framework for solving inverse problems in imaging. A critical concern regarding these models is their performance on out-of-distribution tasks, which remains an under-explored challenge. Applied to an out-of-distribution dataset, a diffusion model can generate realistic-looking reconstructions, but it may hallucinate image features that are present only in its training dataset. To address this train-test distribution shift and improve reconstruction accuracy, we introduce a novel sampling framework called Steerable Conditional Diffusion. Specifically, this framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement. Utilising our proposed method, we achieve substantial improvements in out-of-distribution performance across diverse imaging modalities, advancing the robust deployment of denoising diffusion models in real-world applications.
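To make the core idea concrete, the following is a minimal, hypothetical sketch of measurement-guided model adaptation during sampling: at each step, a stand-in "denoiser" is adjusted using only the measurement, and the adapted model then drives the reconstruction update. The linear denoiser `W`, the forward operator `A`, the step sizes, and all names are illustrative assumptions for this toy demonstration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 8, 4                          # signal and measurement dimensions (toy sizes)
A = rng.standard_normal((m, d))      # known forward operator of the inverse problem
x_true = rng.standard_normal(d)      # ground-truth signal (unknown at test time)
y = A @ x_true                       # the measurement: the only information used

W = np.eye(d)                        # stand-in "denoiser": clean estimate x0_hat = W @ x_t
x_t = rng.standard_normal(d)         # current noisy sample in the sampling trajectory

def data_loss(W):
    """Data-consistency loss of the model's clean estimate at x_t."""
    r = A @ (W @ x_t) - y
    return 0.5 * float(r @ r)

loss_before = data_loss(W)

# Adaptation inner loop: gradient descent on the data-consistency loss,
# using only y. For this quadratic loss, grad_W = A^T (A W x_t - y) x_t^T.
# The small learning rate keeps the updates stable.
lr = 5e-4
for _ in range(300):
    grad_W = np.outer(A.T @ (A @ (W @ x_t) - y), x_t)
    W -= lr * grad_W

loss_after = data_loss(W)

# Sampling step with the adapted model: move x_t toward its clean estimate.
x_t = 0.7 * x_t + 0.3 * (W @ x_t)

print(loss_before, loss_after)  # the adapted model fits the measurement better
```

The point of the sketch is the ordering: the model is updated from the measurement alone (no retraining data), and only then is the sampling iterate refined with the adapted model, mirroring the "adapt concurrently with reconstruction" description above.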