Self-supervised learning has been increasingly investigated for low-dose computed tomography (LDCT) image denoising, as it alleviates the dependence on paired normal-dose CT (NDCT) data, which are often difficult to collect. However, many existing self-supervised blind-spot denoising methods suffer from training inefficiency and suboptimal performance due to restricted receptive fields. To mitigate this issue, we propose Progressive $\mathcal{J}$-invariant Learning, a novel framework that fully exploits $\mathcal{J}$-invariance to enhance LDCT denoising performance. We introduce a step-wise blind-spot denoising mechanism that enforces conditional independence progressively, enabling more fine-grained learning for denoising. Furthermore, we explicitly inject a combination of controlled Gaussian and Poisson noise during training to regularize the denoising process and mitigate overfitting. Extensive experiments on the Mayo LDCT dataset demonstrate that the proposed method consistently outperforms existing self-supervised approaches and achieves performance comparable to, or better than, several representative supervised denoising methods.
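The controlled Gaussian-plus-Poisson noise injection described above could be sketched as follows. This is a minimal illustration only, not the paper's implementation; the function name `inject_mixed_noise` and the parameters `sigma` and `photon_count` are hypothetical placeholders, since the abstract does not specify the noise levels used.

```python
import numpy as np

def inject_mixed_noise(img, sigma=0.01, photon_count=1e4, rng=None):
    """Add a controlled mix of Poisson (signal-dependent) and Gaussian
    (additive electronic) noise to an image with intensities in [0, 1].

    Hypothetical sketch: `sigma` and `photon_count` are illustrative
    placeholders, not values from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Poisson component: simulate photon counting at the chosen dose level,
    # then rescale back to the original intensity range.
    noisy = rng.poisson(np.clip(img, 0.0, 1.0) * photon_count) / photon_count
    # Gaussian component: zero-mean additive noise with std `sigma`.
    noisy = noisy + rng.normal(0.0, sigma, size=img.shape)
    return noisy.astype(np.float32)
```

Applied on the fly to each training input, such perturbations act as a simple data-dependent regularizer, discouraging the network from memorizing the fixed noise pattern of any single scan.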