Medical image segmentation is a key task in the imaging workflow, influencing many image-based decisions. Traditional fully-supervised segmentation models rely on large amounts of labeled training data, typically obtained through manual annotation, which can be an expensive, time-consuming, and error-prone process. This signals a need for accurate, automatic, and annotation-efficient methods of training these models. We propose SAM-Mix, a novel multitask learning framework for medical image segmentation that uses class activation maps produced by an auxiliary classifier to guide the predictions of the semi-supervised segmentation branch, which is based on the Segment Anything Model (SAM) framework. Experimental evaluations on the public LiTS dataset confirm the effectiveness of SAM-Mix for simultaneous classification and segmentation of the liver from abdominal computed tomography (CT) scans. When trained for 90% fewer epochs on only 50 labeled 2D slices, representing just 0.04% of the available labeled training data, SAM-Mix achieves a Dice improvement of 5.1% over the best baseline model. The generalization results for SAM-Mix are even more impressive, with the same model configuration yielding a 25.4% Dice improvement on a cross-domain segmentation task. Our code is available at https://github.com/tbwa233/SAM-Mix.