Diffusion models have quickly risen in popularity for their ability to model complex distributions and perform effective posterior sampling. Unfortunately, the iterative nature of these generative models makes them computationally expensive and unsuitable for real-time sequential inverse problems such as ultrasound imaging. Given the strong temporal structure across sequences of frames, we propose a novel approach that models the transition dynamics to improve the efficiency of sequential diffusion posterior sampling in conditional image synthesis. By modeling sequence data with a video vision transformer (ViViT) transition model conditioned on previous diffusion outputs, we can initialize the reverse diffusion trajectory at a lower noise scale, greatly reducing the number of iterations required for convergence. We demonstrate the effectiveness of our approach on a real-world dataset of high frame rate cardiac ultrasound images and show that it matches the performance of a full diffusion trajectory while accelerating inference by 25$\times$, enabling real-time posterior sampling. Furthermore, we show that the addition of a transition model improves PSNR by up to 8\% in cases with severe motion. Our method opens up new possibilities for real-time applications of diffusion models in imaging and other domains requiring real-time inference.
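The core idea can be sketched in a few lines: instead of running the reverse diffusion from pure noise at step $T$, forward-noise the transition model's prediction of the next frame to an intermediate level $\tau \ll T$ and run only $\tau$ reverse steps. The sketch below is a minimal illustration under assumed components: a hypothetical linear beta schedule, a stand-in denoiser in place of a trained score network, and `tau = 40` as an illustrative warm-start level; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear beta schedule (assumption; the paper's schedule is not given here).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def diffuse_to(x0_pred, tau):
    """Forward-noise the transition model's frame prediction to intermediate level tau."""
    eps = rng.standard_normal(x0_pred.shape)
    return np.sqrt(alpha_bars[tau]) * x0_pred + np.sqrt(1.0 - alpha_bars[tau]) * eps

def reverse_from(x_tau, tau, denoise_fn):
    """Ancestral reverse loop starting at tau instead of T; returns sample and step count."""
    x, steps = x_tau, 0
    for t in range(tau, 0, -1):
        eps_hat = denoise_fn(x, t)
        # Standard DDPM posterior-mean update given the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 1:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
        steps += 1
    return x, steps

# Stand-in denoiser (predicts zero noise); a trained score network in practice.
dummy_denoiser = lambda x, t: np.zeros_like(x)

x0_pred = np.zeros((8, 8))   # transition model's prediction of the next frame (placeholder)
tau = 40                     # warm-start noise level: T // tau = 25x fewer reverse steps
x_tau = diffuse_to(x0_pred, tau)
sample, n_steps = reverse_from(x_tau, tau, dummy_denoiser)
```

The warm start runs `tau` reverse iterations rather than `T`, which is where the reported 25$\times$ speedup comes from when $\tau = T/25$; the better the transition model's prediction, the lower $\tau$ can be pushed without hurting sample quality.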