Magnetic Resonance Imaging (MRI) provides detailed tissue information, but its clinical application is limited by long acquisition times, high cost, and restricted resolution. Image translation has recently gained attention as a strategy to address these limitations. Although Pix2Pix has been widely applied to medical image translation, its potential has not been fully explored. In this study, we propose an enhanced Pix2Pix framework that integrates Squeeze-and-Excitation Residual Networks (SEResNet) and U-Net++ to improve image generation quality and structural fidelity. SEResNet strengthens critical feature representations through channel attention, while U-Net++ enhances multi-scale feature fusion. A simplified PatchGAN discriminator further stabilizes training and refines local anatomical realism. Experimental results demonstrate that under few-shot conditions with fewer than 500 images, the proposed method achieves consistent structural fidelity and superior image quality across multiple intra-modality MRI translation tasks, demonstrating strong generalization ability. These results suggest an effective extension of Pix2Pix for medical image translation.
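The channel attention mentioned above can be illustrated with a minimal Squeeze-and-Excitation block. This is a hedged NumPy sketch of the general SE mechanism, not the authors' implementation; the weight shapes and the use of a plain bottleneck MLP are assumptions for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    x  : feature map of shape (C, H, W)
    w1 : (C // r, C) reduction weights of the bottleneck MLP (assumed shapes)
    w2 : (C, C // r) expansion weights of the bottleneck MLP
    """
    # Squeeze: global average pooling yields one descriptor per channel, shape (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Scale: reweight each channel of the input feature map
    return x * s[:, None, None]
```

In an SEResNet-style generator, the reweighted output would feed back into a residual connection, letting the network emphasize informative channels while suppressing less useful ones.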