Foundation models like the Segment Anything Model (SAM) show strong generalization, yet adapting them to medical images remains difficult due to domain shift, scarce labels, and the inability of Parameter-Efficient Fine-Tuning (PEFT) to exploit unlabeled data. While conventional models like U-Net excel at semi-supervised medical learning, their potential to assist a PEFT-adapted SAM has been largely overlooked. We introduce SC-SAM, a specialist-generalist framework in which U-Net provides point-based prompts and pseudo-labels to guide SAM's adaptation, while SAM serves as a powerful generalist supervisor that regularizes U-Net. This reciprocal guidance forms a bidirectional co-training loop that lets both models effectively exploit unlabeled data. Across prostate MRI and polyp segmentation benchmarks, our method achieves state-of-the-art results, outperforming existing semi-supervised SAM variants and even medical foundation models such as MedSAM, highlighting the value of specialist-generalist cooperation for label-efficient medical image segmentation. Our code is available at https://github.com/vnlvi2k3/SC-SAM.
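The bidirectional co-training loop described above can be sketched schematically. This is a minimal toy illustration, not the authors' implementation: all function names (`unet_predict`, `sam_predict`, `point_prompt`, `co_training_step`) and the 1-D "image" are hypothetical stand-ins showing the data flow, in which the specialist (U-Net) yields a pseudo-label and a point prompt for the generalist (SAM), whose output in turn regularizes the specialist on unlabeled data.

```python
def unet_predict(image):
    """Specialist stand-in: per-pixel foreground scores on a toy 1-D image."""
    return [1.0 if v > 0.5 else 0.0 for v in image]

def point_prompt(mask):
    """Pick one confident foreground pixel as a point prompt for the generalist."""
    fg = [i for i, m in enumerate(mask) if m == 1.0]
    return fg[len(fg) // 2] if fg else None

def sam_predict(image, prompt):
    """Generalist stand-in: grow a region around the prompted point."""
    if prompt is None:
        return [0.0] * len(image)
    return [1.0 if abs(i - prompt) <= 1 and v > 0.3 else 0.0
            for i, v in enumerate(image)]

def co_training_step(unlabeled_image):
    """One reciprocal-guidance step: U-Net -> prompt + pseudo-label -> SAM.

    In training, the U-Net pseudo-label would supervise SAM's PEFT adapter
    and SAM's mask would regularize U-Net; here we just return both targets.
    """
    unet_mask = unet_predict(unlabeled_image)        # specialist pseudo-label
    prompt = point_prompt(unet_mask)                 # prompt for the generalist
    sam_mask = sam_predict(unlabeled_image, prompt)  # generalist supervision
    return unet_mask, sam_mask

unet_target, sam_target = co_training_step([0.1, 0.6, 0.9, 0.7, 0.2])
```

The key design point the sketch mirrors is that neither model trains on unlabeled data alone: each consumes the other's output as a supervisory signal, closing the specialist-generalist loop.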