Positron emission tomography (PET) offers powerful functional imaging but involves radiation exposure. Efforts to reduce this exposure by lowering the radiotracer dose or scan time can degrade image quality. Using magnetic resonance (MR) images, which carry clearer anatomical information, to restore standard-dose PET (SPET) from low-dose PET (LPET) is a promising approach, but it is challenged by structural and textural inconsistencies in multi-modality fusion, as well as by the mismatch of out-of-distribution (OOD) data. In this paper, we propose a supervise-assisted multi-modality fusion diffusion model (MFdiff) to address these challenges and achieve high-quality PET restoration. First, to fully exploit auxiliary MR images without introducing extraneous details into the restored image, a multi-modality feature fusion module is designed to learn an optimized fusion feature. Second, using the fusion feature as an additional condition, high-quality SPET images are iteratively generated with the diffusion model. Furthermore, we introduce a two-stage supervise-assisted learning strategy that harnesses both generalized priors from simulated in-distribution datasets and specific priors tailored to in-vivo OOD data. Experiments demonstrate that the proposed MFdiff effectively restores high-quality SPET images from multi-modality inputs and outperforms state-of-the-art methods both qualitatively and quantitatively.
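The fuse-then-condition pipeline sketched above can be illustrated with a toy NumPy example. This is a minimal sketch under assumed forms, not the paper's architecture: the channel-wise sigmoid gate, the placeholder noise predictor `eps_theta`, and all dimensions and schedule values are hypothetical stand-ins for the learned fusion module and the conditioned diffusion denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-voxel feature vectors for the two modalities (dimensions are
# illustrative, not from the paper).
d = 8
f_lpet = rng.standard_normal(d)  # low-dose PET features
f_mr = rng.standard_normal(d)    # auxiliary MR features

# Hypothetical fusion: a learned channel-wise gate decides how much MR
# anatomical detail to admit, suppressing extraneous MR texture.
w = rng.standard_normal(d)              # gate parameters (random stand-in)
gate = 1.0 / (1.0 + np.exp(-w))         # in (0, 1) per channel
f_fused = gate * f_mr + (1.0 - gate) * f_lpet


def eps_theta(x_t, t, cond):
    # Placeholder noise predictor; the real model would be a neural
    # denoiser conditioned on the fusion feature `cond`.
    return 0.1 * x_t + 0.05 * cond


# One simplified reverse-diffusion (DDPM-style) step: refine the noisy
# SPET estimate x_t toward x_{t-1} using the fusion feature as condition.
beta_t = 0.02                 # noise-schedule value at this step (assumed)
alpha_t = 1.0 - beta_t
alpha_bar_t = 0.5             # cumulative product up to t (assumed)
x_t = rng.standard_normal(d)  # current noisy SPET estimate
x_prev = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t)
          * eps_theta(x_t, t=10, cond=f_fused)) / np.sqrt(alpha_t)

print(x_prev.shape)
```

Iterating this conditioned update from pure noise down to step 0 yields the restored SPET image; the gate illustrates one way a fusion module can keep MR guidance from injecting modality-specific detail.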