Current deep-learning-based methods for self-supervised low-dose CT (LDCT) reconstruction, while reducing the dependence on paired data, suffer a significant drop in generalization when trained on single-dose data and extended to other doses. To enable dose-extendable generalization using only single-dose projection data for training, this work proposes a novel Extendable GENeraLization self-supervised Diffusion (EGenDiff) method for low-dose CT reconstruction. Specifically, a contextual subdata self-enhancing similarity strategy is designed to provide an initial prior for the subsequent process. During training, this initial prior is combined with knowledge distillation and latent diffusion models to optimize image details. At the inference stage, a pixel-wise self-correcting fusion technique is proposed to enhance data fidelity, enabling broad generalization to higher and lower doses, and even to unseen doses. EGenDiff requires only LDCT projection data for training and testing. Comprehensive evaluation on benchmark datasets, clinical data, photon-counting CT data, and across all three anatomical planes (transverse, coronal, and sagittal) demonstrates that EGenDiff achieves extendable multi-dose generalization, yielding reconstructions that consistently outperform leading existing methods.
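The abstract does not specify the pixel-wise self-correcting fusion rule. As a purely illustrative sketch (the function name, weighting scheme, and inputs below are assumptions, not the paper's actual method), one way to fuse a diffusion-prior image with a data-fidelity reconstruction per pixel is to weight by local disagreement between the two estimates:

```python
import numpy as np

def pixelwise_self_correcting_fusion(prior_img, fidelity_img, eps=1e-6):
    """Hypothetical sketch: fuse a diffusion-prior image with a
    data-fidelity reconstruction using per-pixel weights derived
    from their disagreement (NOT the paper's exact rule)."""
    disagreement = np.abs(prior_img - fidelity_img)
    # Trust the data-fidelity image more where the two estimates disagree.
    w = disagreement / (disagreement.max() + eps)
    # Per-pixel convex combination of the two reconstructions.
    return (1.0 - w) * prior_img + w * fidelity_img

# Toy usage on random 64x64 "reconstructions"
rng = np.random.default_rng(0)
prior = rng.standard_normal((64, 64))
fidelity = prior + 0.1 * rng.standard_normal((64, 64))
fused = pixelwise_self_correcting_fusion(prior, fidelity)
```

Because the weights lie in [0, 1], each output pixel stays between the two input estimates, which is one simple way to enforce a fidelity-preserving blend.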