Alzheimer's disease (AD) is the most common form of dementia, characterised by cognitive decline and biomarkers such as tau protein aggregates. Tau positron emission tomography (tau-PET), which employs a radiotracer that selectively binds to tau aggregates so they can be detected and visualised within the brain, is valuable for early AD diagnosis but is less accessible owing to its high cost, limited availability, and invasiveness. Image synthesis with neural networks enables tau-PET images to be generated from the more accessible T1-weighted (T1w) magnetic resonance imaging (MRI). To ensure high-quality image synthesis, we propose a cyclic 2.5D perceptual loss combined with mean squared error and structural similarity index measure (SSIM) losses. The cyclic 2.5D perceptual loss computes the average 2D perceptual loss over axial slices for a specified number of epochs, then over coronal and sagittal slices for the same number of epochs each. This sequence is repeated cyclically, with the per-plane interval shrinking as the cycles progress. We perform supervised synthesis of tau-PET images from T1w MRI using 516 paired 3D T1w MRI and tau-PET volumes from the ADNI database. For the collected data, we perform preprocessing, including intensity standardisation of the tau-PET images from each manufacturer. Applied to a generative 3D U-Net and its variants, the proposed loss outperforms 2.5D and 3D perceptual losses in SSIM and peak signal-to-noise ratio (PSNR). In addition, adding the cyclic 2.5D perceptual loss to the original losses of GAN-based image synthesis models such as CycleGAN and Pix2Pix improves SSIM and PSNR by at least 2% and 3%, respectively. Furthermore, by-manufacturer PET standardisation helps the models synthesise higher-quality images than min-max PET normalisation.
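The two ingredients of the cyclic 2.5D perceptual loss described above, averaging a 2D loss over the slices of one anatomical plane and cycling through the three planes with a shrinking per-plane interval, can be sketched as follows. This is a minimal illustration in NumPy under assumed details: the abstract does not specify the decay schedule, so the interval here halves after each full cycle (down to a floor), and a caller-supplied 2D `slice_loss` stands in for the actual perceptual network; the function names and defaults are hypothetical.

```python
import numpy as np

PLANES = ("axial", "coronal", "sagittal")  # cycled in this fixed order

def plane_for_epoch(epoch, initial_interval=10, min_interval=2):
    """Return which 2D plane the 2.5D loss uses at a given epoch.
    Each cycle visits axial -> coronal -> sagittal for `interval`
    epochs per plane; the interval halves (down to `min_interval`)
    once a full cycle completes. The halving schedule is an
    illustrative assumption, not the paper's exact specification."""
    interval, e = initial_interval, epoch
    while True:
        cycle_len = interval * len(PLANES)
        if e < cycle_len:
            return PLANES[e // interval]
        e -= cycle_len
        interval = max(min_interval, interval // 2)

def average_2d_loss(pred, target, plane, slice_loss):
    """Average a 2D loss over all slices along the chosen plane of a
    3D volume (assumed axis order: axial x coronal x sagittal).
    `slice_loss` would be a perceptual loss in the actual setup."""
    axis = PLANES.index(plane)
    pred = np.moveaxis(pred, axis, 0)
    target = np.moveaxis(target, axis, 0)
    return float(np.mean([slice_loss(p, t) for p, t in zip(pred, target)]))
```

A training loop would then pick the plane once per epoch, e.g. `plane = plane_for_epoch(epoch)` followed by `average_2d_loss(fake, real, plane, perceptual_2d)`, and add the result to the MSE and SSIM terms.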