The recent Mamba model has shown remarkable adaptability for visual representation learning, including in medical imaging tasks. This study introduces MambaMIR, a Mamba-based model for medical image reconstruction, as well as its Generative Adversarial Network-based variant, MambaMIR-GAN. Our proposed MambaMIR inherits several advantages from the original Mamba model, including linear complexity, global receptive fields, and dynamic weights. The proposed arbitrary-mask mechanism effectively adapts Mamba to our image reconstruction task, providing the randomness required for subsequent Monte Carlo-based uncertainty estimation. Experiments conducted on various medical image reconstruction tasks, including fast MRI and sparse-view CT (SVCT), covering anatomical regions such as the knee, chest, and abdomen, demonstrate that MambaMIR and MambaMIR-GAN achieve comparable or superior reconstruction results relative to state-of-the-art methods. Additionally, the estimated uncertainty maps offer further insight into the reliability of the reconstruction quality. The code is publicly available at https://github.com/ayanglab/MambaMIR.
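The Monte Carlo-based uncertainty estimation mentioned above can be illustrated with a minimal sketch: the model is run several times, each pass with a freshly sampled arbitrary mask, and the per-pixel standard deviation across passes serves as the uncertainty map. The `reconstruct` function below is a hypothetical stand-in for a MambaMIR forward pass, not the actual model; the mask-sampling scheme and `drop_prob` parameter are likewise illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def reconstruct(undersampled, mask):
    # Hypothetical stand-in for a MambaMIR forward pass; the real
    # model is available at https://github.com/ayanglab/MambaMIR.
    return undersampled * mask


def mc_uncertainty(undersampled, n_samples=8, drop_prob=0.1):
    """Monte Carlo uncertainty estimate via random masking (sketch).

    Each forward pass uses a freshly sampled random mask; the mean
    over passes gives the final reconstruction, and the per-pixel
    standard deviation gives the uncertainty map.
    """
    samples = []
    for _ in range(n_samples):
        # Sample an arbitrary binary mask (illustrative scheme).
        mask = (rng.random(undersampled.shape) > drop_prob).astype(float)
        samples.append(reconstruct(undersampled, mask))
    stack = np.stack(samples)
    return stack.mean(axis=0), stack.std(axis=0)


img = rng.random((4, 4))
mean_rec, unc_map = mc_uncertainty(img)
print(mean_rec.shape, unc_map.shape)
```

Regions where the passes disagree (high standard deviation) flag parts of the reconstruction that should be interpreted with caution.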