We propose a straightforward yet effective few-shot fine-tuning strategy for adapting the Segment Anything Model (SAM) to anatomical segmentation tasks in medical images. Our approach reformulates the mask decoder within SAM, using few-shot embeddings derived from a limited set of labeled images (the few-shot collection) as prompts for querying anatomical objects captured in image embeddings. This reformulation greatly reduces the need for time-consuming online user interaction when labeling volumetric images, such as exhaustively marking points and bounding boxes slice by slice to provide prompts. With our method, users manually segment a few 2D slices offline, and the embeddings of these annotated image regions then serve as effective prompts for online segmentation tasks. Our method keeps fine-tuning efficient by training only the mask decoder, using caching mechanisms, while the image encoder remains frozen. Importantly, this approach is not limited to volumetric medical images but applies generically to any 2D/3D segmentation task. To evaluate our method thoroughly, we conducted extensive validation on four datasets, covering six anatomical segmentation tasks across two modalities, and performed a comparative analysis of different prompting options within SAM as well as the fully supervised nnU-Net. The results show that our method outperforms SAM using only point prompts (approximately 50% improvement in IoU) and performs on par with fully supervised methods while reducing the labeled-data requirement by at least an order of magnitude.
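To make the core idea concrete, the following is a minimal sketch (not the paper's actual mask-decoder reformulation) of how embeddings of offline-annotated regions can act as prompts: features from a frozen encoder are averaged over the labeled pixels of the few-shot slices and cached, and new slices are then queried by similarity against the cached prompt embedding. The helper names (`cache_prompt_embedding`, `query_mask`) and the cosine-similarity thresholding are illustrative assumptions, not SAM's API.

```python
import numpy as np

def cache_prompt_embedding(feature_maps, masks):
    """Average frozen-encoder features over the annotated pixels of each
    few-shot slice, then pool across the collection (hypothetical helper)."""
    per_slice = [f[m > 0].mean(axis=0) for f, m in zip(feature_maps, masks)]
    return np.mean(per_slice, axis=0)

def query_mask(feature_map, prompt_embedding, threshold=0.5):
    """Score every spatial embedding against the cached prompt by cosine
    similarity and threshold to obtain a binary mask (illustrative stand-in
    for the learned mask decoder)."""
    f = feature_map / np.linalg.norm(feature_map, axis=-1, keepdims=True)
    p = prompt_embedding / np.linalg.norm(prompt_embedding)
    return (f @ p) > threshold

# Toy example: a 4x4 feature map with 2-D embeddings, where the top half
# belongs to the "organ" and the bottom half to background.
feat = np.zeros((4, 4, 2))
feat[:2, :, :] = [1.0, 0.0]   # organ pixels
feat[2:, :, :] = [0.0, 1.0]   # background pixels
mask = np.zeros((4, 4))
mask[:2, :] = 1               # offline annotation of one slice

prompt = cache_prompt_embedding([feat], [mask])
pred = query_mask(feat, prompt)  # recovers the annotated region
```

In the actual method, this similarity step is replaced by the fine-tuned mask decoder, which is the only component updated during training; the sketch only conveys why cached region embeddings can substitute for per-slice point or box prompts.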