Labeling multiple organs for segmentation is a complex and time-consuming process, which has led to a scarcity of comprehensively labeled multi-organ datasets alongside the emergence of numerous partially labeled ones. Existing methods fail to effectively exploit the supervision available in these datasets, impeding further improvements in segmentation accuracy. This paper proposes a two-stage multi-organ segmentation method based on mutual learning, which aims to improve multi-organ segmentation performance by exchanging complementary information among partially labeled datasets. In the first stage, each partial-organ segmentation model exploits the non-overlapping organ labels across datasets and the distinct organ features extracted by different models, introducing an additional mutual-difference learning mechanism to generate higher-quality pseudo labels for unlabeled organs. In the second stage, each full-organ segmentation model is supervised by fully labeled datasets completed with these pseudo labels and leverages true labels from the other datasets, while dynamically sharing accurate features across models through an additional mutual-similarity learning mechanism that further enhances multi-organ segmentation performance. Extensive experiments were conducted on nine datasets covering the head and neck, chest, abdomen, and pelvis. The results show that our method achieves state-of-the-art performance on partially labeled segmentation tasks, and ablation studies thoroughly confirm the efficacy of the mutual learning mechanism.
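To make the first-stage training objective concrete, the sketch below illustrates one common way mutual learning between two partial-organ models can be instantiated: each model is supervised by its own dataset's organ labels and additionally by the peer model's predictions on the organs it lacks labels for. This is a minimal NumPy sketch under assumed simplifications (per-voxel logits flattened to rows, hard pseudo labels, equal loss weights); the function and variable names are hypothetical and this is not the paper's exact loss formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(pred, target_onehot, eps=1e-8):
    # Mean cross-entropy between predicted probabilities and one-hot targets.
    return -np.mean(np.sum(target_onehot * np.log(pred + eps), axis=-1))

def stage1_losses(logits_a, logits_b, labels_a, labels_b, num_classes=3):
    """Hypothetical stage-1 objective for two partial-organ models.

    Model A's dataset labels one organ subset, model B's another
    (class 0 = background). Each model's loss combines:
      * supervised CE against its own dataset's labels, and
      * CE against hard pseudo labels from the peer model,
        standing in for the mutual learning term on unlabeled organs.
    """
    pa, pb = softmax(logits_a), softmax(logits_b)
    # Hard pseudo labels taken from the peer model's predictions.
    pseudo_from_b = np.eye(num_classes)[pb.argmax(-1)]
    pseudo_from_a = np.eye(num_classes)[pa.argmax(-1)]
    onehot_a = np.eye(num_classes)[labels_a]
    onehot_b = np.eye(num_classes)[labels_b]
    loss_a = cross_entropy(pa, onehot_a) + cross_entropy(pa, pseudo_from_b)
    loss_b = cross_entropy(pb, onehot_b) + cross_entropy(pb, pseudo_from_a)
    return loss_a, loss_b
```

In a full pipeline these losses would be minimized jointly so that each model's pseudo labels for the organs it was never trained on improve over time, which is the premise of the second-stage full-organ training.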