Automatic medical image segmentation is a fundamental step in computer-aided diagnosis, yet fully supervised approaches demand extensive pixel-level annotations that are costly and time-consuming to obtain. To alleviate this burden, we propose a weakly supervised segmentation framework that requires only four extreme points per image as annotation. Specifically, bounding boxes derived from the extreme points serve as prompts for the Segment Anything Model 2 (SAM2) to generate reliable initial pseudo labels. These pseudo labels are progressively refined by an enhanced Feature-Guided Extreme Point Masking (FGEPM) algorithm, which incorporates Monte Carlo dropout-based uncertainty estimation to construct a unified gradient-uncertainty cost map for boundary tracing. Furthermore, a dual-branch Uncertainty-aware Scale Consistency (USC) loss and a box alignment loss are introduced to enforce spatial consistency and precise boundary alignment during training. Extensive experiments on two public ultrasound datasets, BUSI and UNS, demonstrate that our method achieves performance comparable to, and in some cases surpassing, that of fully supervised counterparts while substantially reducing annotation cost. These results validate the effectiveness and practicality of the proposed weakly supervised framework for ultrasound image segmentation.
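The first step of the pipeline, deriving a bounding-box prompt from the four extreme-point annotations, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the extreme points are assumed to be (x, y) pixel coordinates (left, top, right, bottom of the target), and the resulting box follows the (x_min, y_min, x_max, y_max) convention commonly used for box prompts.

```python
def box_from_extreme_points(points):
    """Derive a bounding-box prompt (x_min, y_min, x_max, y_max)
    from four extreme points given as (x, y) tuples.

    The tightest axis-aligned box enclosing the four points is the
    box spanned by their coordinate-wise minima and maxima.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))


# Hypothetical left, top, right, bottom extreme points of a lesion:
extremes = [(12, 40), (35, 18), (60, 42), (34, 70)]
print(box_from_extreme_points(extremes))  # (12, 18, 60, 70)
```

A box of this form could then be passed as a box prompt to a promptable segmenter such as SAM2 to obtain the initial pseudo label; the subsequent FGEPM refinement and uncertainty estimation operate on that output.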