While the Segment Anything Model (SAM) has achieved remarkable success in image segmentation, its direct application to medical imaging remains hindered by fundamental challenges, including ambiguous boundaries, insufficient modeling of anatomical relationships, and the absence of uncertainty quantification. To address these limitations, we introduce KG-SAM, a knowledge-guided framework that synergistically integrates anatomical priors with boundary refinement and uncertainty estimation. Specifically, KG-SAM incorporates (i) a medical knowledge graph to encode fine-grained anatomical relationships, (ii) an energy-based Conditional Random Field (CRF) to enforce anatomically consistent predictions, and (iii) an uncertainty-aware fusion module to enhance reliability in high-stakes clinical scenarios. Extensive experiments across multi-center medical datasets demonstrate the effectiveness of our approach: KG-SAM achieves an average Dice score of 82.69% on prostate segmentation and delivers substantial gains in abdominal segmentation, reaching 78.05% on MRI and 79.68% on CT. These results establish KG-SAM as a robust and generalizable framework for advancing medical image segmentation.