Background: High-resolution MRI is critical for diagnosis, but long acquisition times limit its clinical use. Super-resolution (SR) can enhance resolution after the scan, yet existing deep learning methods face a trade-off between fidelity and efficiency. Purpose: To develop a computationally efficient and accurate deep learning framework for MRI SR that preserves anatomical detail for clinical integration. Materials and Methods: We propose a novel SR framework combining multi-head selective state-space models (MHSSM) with a lightweight channel MLP. The model uses 2D patch extraction with hybrid scanning to capture long-range dependencies. Each MambaFormer block integrates MHSSM, depthwise convolutions, and gated channel mixing. Evaluation used 7T brain T1 MP2RAGE maps (n=142) and 1.5T prostate T2w MRI (n=334). Comparisons included bicubic interpolation, GANs (CycleGAN, Pix2pix, SPSR), transformers (SwinIR), Mamba (MambaIR), and diffusion models (I2SB, Res-SRDiff). Results: Our model achieved superior accuracy with exceptional efficiency. For 7T brain data: SSIM=0.951±0.021, PSNR=26.90±1.41 dB, LPIPS=0.076±0.022, GMSD=0.083±0.017, significantly outperforming all baselines (p<0.001). For prostate data: SSIM=0.770±0.049, PSNR=27.15±2.19 dB, LPIPS=0.190±0.095, GMSD=0.087±0.013. The framework used only 0.9M parameters and 57 GFLOPs, reducing parameters by 99.8% and computation by 97.5% versus Res-SRDiff, while outperforming SwinIR and MambaIR in both accuracy and efficiency. Conclusion: The proposed framework provides an efficient, accurate MRI SR solution, delivering enhanced anatomical detail across datasets. Its low computational demand and state-of-the-art performance show strong potential for clinical translation.
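To make the MambaFormer block described above concrete, the following is a minimal NumPy sketch, not the authors' implementation: all weight shapes, initializations, the diagonal-state discretization, and the kernel size are illustrative assumptions. It shows the three stages named in the abstract: a multi-head selective state-space scan (MHSSM), a depthwise convolution along the token sequence, and a gated channel MLP, each with a residual connection.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed; real weights would be learned

def selective_scan(x, delta, A, B, C):
    """Simplified diagonal selective state-space scan (illustrative).
    x:     (L, d)  input token sequence for one head
    delta: (L, d)  input-dependent step sizes (the 'selective' part)
    A:     (d, n)  negative state-decay parameters (stability)
    B, C:  (L, n)  input-dependent input/output projections
    Returns y: (L, d)
    """
    L, d = x.shape
    h = np.zeros((d, A.shape[1]))
    y = np.empty((L, d))
    for t in range(L):
        # zero-order-hold style discretization: A_bar = exp(delta*A), B_bar ~ delta*B
        A_bar = np.exp(delta[t][:, None] * A)                       # (d, n)
        h = A_bar * h + (delta[t][:, None] * B[t][None, :]) * x[t][:, None]
        y[t] = h @ C[t]                                             # (d,)
    return y

def mambaformer_block(x, n_heads=4, state=8):
    """Illustrative MambaFormer-style block: MHSSM + depthwise conv
    + gated channel MLP, all with residual connections."""
    L, C_dim = x.shape
    dh = C_dim // n_heads
    # --- multi-head selective SSM: each head scans its channel slice ---
    out = np.empty_like(x)
    for i in range(n_heads):
        xs = x[:, i*dh:(i+1)*dh]
        delta = np.log1p(np.exp(xs @ (rng.standard_normal((dh, dh)) * 0.1)))  # softplus > 0
        A = -np.abs(rng.standard_normal((dh, state)))
        B = xs @ (rng.standard_normal((dh, state)) * 0.1)
        Cc = xs @ (rng.standard_normal((dh, state)) * 0.1)
        out[:, i*dh:(i+1)*dh] = selective_scan(xs, delta, A, B, Cc)
    x = x + out
    # --- depthwise convolution along the sequence (assumed kernel size 3) ---
    k = rng.standard_normal((3, C_dim)) * 0.1   # one 3-tap kernel per channel
    pad = np.pad(x, ((1, 1), (0, 0)))
    x = x + sum(pad[i:i+L] * k[i] for i in range(3))
    # --- gated channel MLP (lightweight channel mixing): (W1 x) * sigmoid(W2 x) -> W3 ---
    W1, W2 = (rng.standard_normal((C_dim, 2 * C_dim)) * 0.1 for _ in range(2))
    W3 = rng.standard_normal((2 * C_dim, C_dim)) * 0.1
    g = (x @ W1) * (1.0 / (1.0 + np.exp(-(x @ W2))))
    return x + g @ W3

tokens = rng.standard_normal((16, 32))  # e.g. 16 patch tokens, 32 channels
y = mambaformer_block(tokens)
print(y.shape)
```

The hybrid 2D patch scanning from the paper is abstracted away here: `tokens` stands in for one already-flattened scan order of image patches, and the sequential Python loop replaces the parallel scan a real implementation would use.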