In this paper, we present XctDiff, an algorithmic framework for reconstructing CT from a single radiograph that decomposes the reconstruction process into two more tractable tasks: feature extraction and CT reconstruction. Specifically, we first design a progressive feature extraction strategy that extracts robust 3D priors from radiographs. We then use the extracted prior information to guide CT reconstruction in the latent space. Moreover, we design a homogeneous spatial codebook to further improve reconstruction quality. Experimental results show that our method achieves state-of-the-art reconstruction performance and overcomes the blurring issue. We also apply XctDiff to a self-supervised pre-training task; its effectiveness suggests promising additional applications in medical image analysis. The code is available at: https://github.com/qingze-bai/XctDiff