Foundation models like the Segment Anything Model (SAM) show strong generalization, yet adapting them to medical images remains difficult due to domain shift, scarce labels, and the inability of Parameter-Efficient Fine-Tuning (PEFT) to exploit unlabeled data. While conventional models like U-Net excel in semi-supervised medical image segmentation, their potential to assist a PEFT-adapted SAM has been largely overlooked. We introduce SC-SAM, a specialist-generalist framework in which U-Net provides point-based prompts and pseudo-labels to guide SAM's adaptation, while SAM serves as a powerful generalist supervisor that regularizes U-Net. This reciprocal guidance forms a bidirectional co-training loop that lets both models exploit unlabeled data effectively. Across prostate MRI and polyp segmentation benchmarks, our method achieves state-of-the-art results, outperforming existing semi-supervised SAM variants and even medical foundation models such as MedSAM, highlighting the value of specialist-generalist cooperation for label-efficient medical image segmentation. Our code is available at https://github.com/vnlvi2k3/SC-SAM.
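The reciprocal guidance described above can be illustrated with a toy, framework-free sketch of a bidirectional co-training loop: each model takes a supervised step on the labeled set, then learns from the other's pseudo-labels on unlabeled data. The perceptron-style learners, the 2-D data, and all names (`specialist`, `generalist`) are illustrative assumptions standing in for U-Net and the PEFT SAM, not the paper's actual implementation.

```python
import random

random.seed(0)

def predict(w, x):
    # Threshold a linear score; stands in for a segmentation model's output.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def update(w, x, y, lr=0.1):
    # Perceptron-style step toward the (pseudo-)label y.
    err = y - predict(w, x)
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

# Toy 2-D task (bias folded in as a constant third feature).
labeled = [([1.0, 1.0, 1.0], 1), ([-1.0, -1.0, 1.0], 0)]
unlabeled = [[random.uniform(-2, 2), random.uniform(-2, 2), 1.0]
             for _ in range(50)]

specialist = [0.0, 0.0, 0.0]    # stands in for the U-Net specialist
generalist = [0.5, -0.5, 0.0]   # stands in for the PEFT-SAM generalist

for _ in range(20):
    for x, y in labeled:                  # supervised step for both models
        specialist = update(specialist, x, y)
        generalist = update(generalist, x, y)
    for x in unlabeled:                   # reciprocal pseudo-label step:
        # specialist teaches generalist, then generalist teaches specialist
        generalist = update(generalist, x, predict(specialist, x))
        specialist = update(specialist, x, predict(generalist, x))

agree = sum(predict(specialist, x) == predict(generalist, x)
            for x in unlabeled)
print(f"agreement on unlabeled points: {agree}/{len(unlabeled)}")
```

In the actual framework the exchanged signals are richer than a scalar label: the specialist's predictions are converted into point prompts and pseudo-masks for SAM, while SAM's masks regularize the specialist, but the loop structure is the same.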