The Segment Anything Model (SAM) has recently emerged as a groundbreaking foundation model for prompt-driven image segmentation. However, both the original SAM and its medical variants require slice-by-slice manual prompting of target structures, which directly increases the burden of deployment. Despite attempts to make SAM fully automatic via auto-prompting, it still exhibits subpar performance and lacks reliability, especially in medical imaging. In this paper, we propose UR-SAM, an uncertainty-rectified SAM framework that enhances the reliability of auto-prompting medical image segmentation. Building upon a localization framework for automatic prompt generation, our method incorporates a prompt augmentation module that produces a series of input prompts to SAM for uncertainty estimation, and an uncertainty-based rectification module that further exploits the distribution of the estimated uncertainty to improve segmentation performance. Extensive experiments on two public 3D medical datasets covering the segmentation of 35 organs demonstrate that, without supplementary training or fine-tuning, our method improves the Dice similarity coefficient by up to 10.7% and 13.8%, demonstrating its efficiency and broad applicability to medical image segmentation without manual prompting.
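The pipeline described above (prompt augmentation → ensemble-based uncertainty estimation → uncertainty-based rectification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `segment` is a hypothetical stand-in for a SAM forward pass, and the jitter range and threshold `tau` are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_box(box, jitter=5, n=8):
    """Prompt augmentation: generate n perturbed copies of a
    bounding-box prompt (x0, y0, x1, y1) by integer jittering."""
    return box + rng.integers(-jitter, jitter + 1, size=(n, 4))

def segment(image, box):
    """Hypothetical stand-in for a SAM forward pass that returns a
    binary mask; a real pipeline would call the SAM predictor here."""
    x0, y0, x1, y1 = np.clip(box, 0, min(image.shape))
    mask = np.zeros(image.shape, dtype=float)
    mask[y0:y1, x0:x1] = 1.0
    return mask

def uncertainty_rectified_mask(image, box, tau=0.25):
    """Run SAM once per augmented prompt, estimate pixel-wise
    uncertainty from the ensemble, and rectify the prediction."""
    masks = np.stack([segment(image, b) for b in augment_box(box)])
    mean = masks.mean(axis=0)        # soft ensemble prediction
    uncertainty = masks.var(axis=0)  # pixel-wise disagreement
    # Rectification: keep pixels that are both confidently foreground
    # and low-uncertainty across the augmented prompts.
    final = (mean > 0.5) & (uncertainty < tau)
    return final, uncertainty
```

Using the ensemble variance as the uncertainty signal means no extra training or fine-tuning is needed, consistent with the abstract's claim; only repeated inference under perturbed prompts.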