Adversarial learning enables generative models to translate MRI from a source sequence to a target sequence in the absence of paired samples. However, deploying adversarial MRI synthesis in clinical settings is challenging due to training instability and mode collapse. To address these issues, we leverage intermediate sequences to estimate the common latent space shared among multi-sequence MRI, enabling the reconstruction of distinct sequences from this common latent space. We propose a generative model that compresses discrete representations of each sequence to estimate the Gaussian distribution of a vector-quantized common (VQC) latent space across multiple sequences. Moreover, we improve latent-space consistency with contrastive learning and increase model stability through domain augmentation. Experiments on the BraTS2021 dataset show that our non-adversarial model outperforms GAN-based methods, and that the VQC latent space gives our model (1) anti-interference ability, eliminating the effects of noise, bias fields, and artifacts, and (2) a solid semantic representation ability with the potential for one-shot segmentation. Our code is publicly available.
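To make the "vector-quantized" part of the VQC latent space concrete, below is a minimal NumPy sketch of the core quantization step: each continuous latent vector is snapped to its nearest entry in a learned codebook. This is illustrative only; the function name `vector_quantize`, the toy codebook, and the data are assumptions for demonstration, and the paper's actual encoder, codebook training, and Gaussian modeling of the VQC space are not shown.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Snap each latent vector in z (N, D) to its nearest codebook entry (K, D).

    Returns the quantized latents and the chosen codebook indices.
    """
    # Squared Euclidean distance between every latent and every code: (N, K).
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)          # index of the nearest code per latent
    return codebook[idx], idx

# Toy example: a 4-entry codebook of 2-D codes, and 3 latents that are
# small perturbations of codes 0, 2, and 2.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((4, 2))
z = codebook[[0, 2, 2]] + 0.05 * rng.standard_normal((3, 2))
zq, idx = vector_quantize(z, codebook)
print(idx)   # indices of the nearest codes
print(zq)    # every row of zq is an exact codebook entry
```

In a VQ-VAE-style model, the gradient is passed through the non-differentiable `argmin` with a straight-through estimator, and commitment losses keep the encoder outputs close to their assigned codes; discretizing the shared latents this way is what lets distinct sequences be reconstructed from a common, compact representation.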