Background: High-resolution MRI is critical for diagnosis, but long acquisition times limit its clinical use. Super-resolution (SR) can enhance resolution after acquisition, yet existing deep learning methods face a fidelity-efficiency trade-off.

Purpose: To develop a computationally efficient and accurate deep learning framework for MRI SR that preserves anatomical detail for clinical integration.

Materials and Methods: We propose a novel SR framework combining multi-head selective state-space models (MHSSM) with a lightweight channel MLP. The model uses 2D patch extraction with hybrid scanning to capture long-range dependencies. Each MambaFormer block integrates MHSSM, depthwise convolutions, and gated channel mixing. Evaluation used 7T brain T1 MP2RAGE maps (n=142) and 1.5T prostate T2-weighted MRI (n=334). Comparators included bicubic interpolation, GANs (CycleGAN, Pix2pix, SPSR), a transformer (SwinIR), a Mamba model (MambaIR), and diffusion models (I2SB, Res-SRDiff).

Results: The proposed model achieved superior accuracy with high efficiency. For 7T brain data: SSIM = 0.951 ± 0.021, PSNR = 26.90 ± 1.41 dB, LPIPS = 0.076 ± 0.022, GMSD = 0.083 ± 0.017, significantly outperforming all baselines (p < 0.001). For prostate data: SSIM = 0.770 ± 0.049, PSNR = 27.15 ± 2.19 dB, LPIPS = 0.190 ± 0.095, GMSD = 0.087 ± 0.013. The framework uses only 0.9M parameters and 57 GFLOPs, reducing parameter count by 99.8% and computation by 97.5% relative to Res-SRDiff, while outperforming SwinIR and MambaIR in both accuracy and efficiency.

Conclusion: The proposed framework provides an efficient, accurate MRI SR solution that delivers enhanced anatomical detail across datasets. Its low computational demand and state-of-the-art performance indicate strong potential for clinical translation.
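To make the selective state-space component concrete, the following is a minimal, illustrative sketch of the per-channel selective scan that Mamba-style blocks (such as the MHSSM described above) are built on. This is not the authors' implementation; the function name and the scalar, single-channel formulation are simplifying assumptions for exposition, showing only the input-dependent (selective) recurrence.

```python
import math

def selective_scan(x, delta, A, B, C):
    """Illustrative single-channel selective state-space scan (Mamba-style).

    x, delta, B, C: per-timestep sequences of length L (in a real model these
    are projections of the input, which is what makes the scan "selective").
    A: scalar continuous-time state decay (negative for a stable state).
    Discretization (zero-order hold): a_t = exp(delta_t * A).
    Recurrence: h_t = a_t * h_{t-1} + delta_t * B_t * x_t;  output y_t = C_t * h_t.
    """
    h, ys = 0.0, []
    for x_t, d_t, b_t, c_t in zip(x, delta, B, C):
        a_t = math.exp(d_t * A)           # discretized state-transition gate
        h = a_t * h + d_t * b_t * x_t     # state update with input-dependent step
        ys.append(c_t * h)                # input-dependent readout
    return ys
```

Because delta, B, and C vary per timestep, the recurrence can selectively retain or discard history, which is what lets such blocks capture long-range dependencies in linear time; a multi-head variant runs several independent scans over different channel groups and scan orders.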