In recent years, deep learning models with transformer components have pushed the performance envelope in medical image synthesis. Unlike convolutional neural networks (CNNs), which use static, local filters, transformers employ self-attention mechanisms that permit adaptive, non-local filtering to sensitively capture long-range context. However, this sensitivity comes at the expense of substantial model complexity, which can compromise learning efficacy, particularly on relatively modest-sized imaging datasets. Here, we propose a novel adversarial model for multi-modal medical image synthesis, I2I-Mamba, that leverages selective state space modeling (SSM) to efficiently capture long-range context while maintaining local precision. To this end, I2I-Mamba injects channel-mixed Mamba (cmMamba) blocks into the bottleneck of a convolutional backbone. In cmMamba blocks, SSM layers learn context across the spatial dimension of feature maps, while channel-mixing layers learn context across the channel dimension. Comprehensive demonstrations are reported for imputing missing images in multi-contrast MRI and MRI-CT protocols. Our results indicate that I2I-Mamba offers superior performance over state-of-the-art CNN- and transformer-based methods in synthesizing target-modality images.
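To make the cmMamba design concrete, the sketch below shows one plausible realization of such a block: bottleneck feature maps are flattened into a spatial token sequence, an SSM layer mixes context along the spatial dimension, and an MLP-style channel-mixing layer mixes context along the channel dimension. This is a minimal illustration under stated assumptions, not the authors' implementation; it assumes the `mamba_ssm` package's `Mamba` layer (a selective SSM operating on sequences of shape (batch, length, channels)), and the layer ordering, normalization choices, and the `ChannelMix`-style MLP are illustrative.

```python
# Minimal sketch of a cmMamba-style block (assumptions noted below),
# not the paper's reference code.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumed dependency: pip install mamba-ssm

class cmMambaBlock(nn.Module):
    def __init__(self, channels: int, mlp_ratio: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        # SSM layer: captures long-range context across the spatial dimension
        self.ssm = Mamba(d_model=channels)
        self.norm2 = nn.LayerNorm(channels)
        # Channel-mixing layer: captures context across the channel dimension
        # (an MLP over channels is one common choice; the paper's exact
        # channel-mixing operator may differ)
        self.channel_mix = nn.Sequential(
            nn.Linear(channels, mlp_ratio * channels),
            nn.GELU(),
            nn.Linear(mlp_ratio * channels, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: bottleneck feature map of shape (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)                   # (B, H*W, C)
        tokens = tokens + self.ssm(self.norm1(tokens))          # spatial context
        tokens = tokens + self.channel_mix(self.norm2(tokens))  # channel context
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```

In this sketch, residual connections around both sub-layers preserve the local detail carried by the convolutional backbone while the SSM and channel-mixing layers add non-local context, mirroring the abstract's stated goal of long-range sensitivity with local precision.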