Undersampled CT volumes reduce acquisition time and radiation exposure but introduce artifacts that degrade image quality and diagnostic utility. Reducing these artifacts is critical for high-quality imaging. We propose a computationally efficient hybrid deep-learning framework that combines the strengths of 2D and 3D models. First, a 2D U-Net operates on individual slices of undersampled CT volumes to extract feature maps. These slice-wise feature maps are then stacked across the volume and used as input to a 3D decoder, which exploits contextual information across slices to predict an artifact-free 3D CT volume. The proposed two-stage approach balances the computational efficiency of 2D processing with the volumetric consistency provided by 3D modeling. The results show substantial improvements in inter-slice consistency in the coronal and sagittal directions at low computational overhead. This hybrid framework offers a robust and efficient solution for high-quality 3D CT image post-processing. The code for this project is available on GitHub: https://github.com/J-3TO/2D-3DCNN_sparseview/.
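The two-stage data flow described above (per-slice 2D feature extraction, stacking across the volume, then 3D decoding) can be sketched as follows. This is a minimal illustration of the tensor shapes only, assuming simple hand-written stand-ins for the learned 2D U-Net and 3D decoder; the function names and operations here are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def extract_2d_features(slice_2d):
    # Stand-in for the 2D U-Net: maps an (H, W) slice to a (C, H, W)
    # feature map. Here: the slice itself plus its two gradient channels.
    gy, gx = np.gradient(slice_2d)
    return np.stack([slice_2d, gy, gx])          # (3, H, W)

def decode_3d(features):
    # Stand-in for the 3D decoder: fuses the stacked (C, D, H, W) features
    # into a single (D, H, W) volume, here via a mean over channels.
    return features.mean(axis=0)

def hybrid_pipeline(volume):
    # volume: (D, H, W) undersampled CT volume.
    per_slice = [extract_2d_features(s) for s in volume]  # D maps of (C, H, W)
    stacked = np.stack(per_slice, axis=1)                 # (C, D, H, W)
    return decode_3d(stacked)                             # (D, H, W)

vol = np.random.rand(8, 64, 64)   # toy volume: 8 slices of 64x64
out = hybrid_pipeline(vol)
print(out.shape)                  # (8, 64, 64)
```

The key design point is that only the decoder sees the depth axis: the expensive feature extraction runs slice by slice in 2D, while the 3D stage operates on the stacked features to enforce inter-slice consistency.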