X-ray Computed Tomography (CT) is one of the most important diagnostic imaging techniques in clinical applications. Sparse-view CT imaging reduces the number of projection views to lower the radiation dose, alleviating the potential risk of radiation exposure. Most existing deep learning (DL) and deep unfolding sparse-view CT reconstruction methods: 1) do not fully use the projection data; 2) do not always link their architecture designs to a mathematical theory; 3) do not flexibly handle multi-sparse-view reconstruction tasks. This paper aims to use mathematical ideas to design optimal DL imaging algorithms for sparse-view tomographic reconstruction. We propose a novel dual-domain deep unfolding unified framework that offers great flexibility for multi-sparse-view CT reconstruction with different numbers of sampling views through a single model. This framework combines the theoretical advantages of model-based methods with the superior reconstruction performance of DL-based methods, yielding the generalizability expected of DL. We propose a refinement module that operates in the unfolded projection domain to refine full-sparse-view projection errors, as well as an image-domain correction module that distills multi-scale geometric error corrections to reconstruct sparse-view CT. This provides a new way to exploit the potential of projection information and a new perspective on designing network architectures. All parameters of the proposed framework are learnable end to end, and the method has the potential to be applied to plug-and-play reconstruction. Extensive experiments demonstrate that our framework is superior to existing state-of-the-art methods. Our source code is available at https://github.com/fanxiaohong/MVMS-RCN.