The universality of deep neural networks across different modalities and their generalization to unseen domains are essential for medical image segmentation. The recent Segment Anything Model (SAM) has demonstrated potential in both settings. However, SAM's heavy computational cost, reliance on manual annotations as prompts, and conflict-prone decoding process limit its generalizability and applicability in clinical scenarios. To address these issues, we propose ESP-MedSAM, an efficient self-prompting SAM for universal domain-generalized medical image segmentation. Specifically, we first devise a Multi-Modal Decoupled Knowledge Distillation (MMDKD) strategy to construct a lightweight, semi-parameter-sharing image encoder that produces discriminative visual features for diverse modalities. We then introduce a Self-Patch Prompt Generator (SPPG) that automatically generates high-quality dense prompt embeddings to guide segmentation decoding. Finally, we design a Query-Decoupled Modality Decoder (QDMD) that uses a one-to-one strategy to provide an independent decoding channel for each modality. Extensive experiments show that ESP-MedSAM outperforms state-of-the-art methods on diverse medical image segmentation tasks, exhibiting superior modality universality and generalization. Notably, ESP-MedSAM uses only 4.5\% of the parameters of SAM-H. The source code is available at https://github.com/xq141839/ESP-MedSAM.
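The abstract does not give implementation details for MMDKD. As a rough illustration of the underlying idea only (distilling a large SAM image encoder into a lightweight student by aligning their features), here is a minimal, hypothetical feature-distillation loss; the function name and the plain mean-squared-error objective are our assumptions, not the paper's actual decoupled-distillation strategy:

```python
def feature_distillation_loss(student_feat, teacher_feat):
    """Mean-squared error between flattened student and teacher features.

    A generic feature-level distillation sketch: the student encoder is
    trained so its features match those of a frozen teacher (e.g. a large
    SAM encoder). The paper's MMDKD additionally decouples modality-common
    and modality-specific knowledge, which is NOT reproduced here.
    """
    assert len(student_feat) == len(teacher_feat), "feature dims must match"
    n = len(student_feat)
    # Average squared difference over all feature dimensions.
    return sum((s - t) ** 2 for s, t in zip(student_feat, teacher_feat)) / n
```

In practice this loss would be minimized jointly with the segmentation objective, with the teacher's weights kept frozen.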