Despite recent progress in medical image segmentation with scribble-based annotations, the segmentation results of most models are still not robust and generalizable enough in open environments. Evidential deep learning (EDL) has recently been proposed as a promising solution for modeling predictive uncertainty and improving the reliability of medical image segmentation. However, directly applying EDL to scribble-supervised medical image segmentation faces a tradeoff between accuracy and reliability. To address this challenge, we propose a novel framework called Dual-Branch Evidential Deep Learning (DuEDL). First, the decoder of the segmentation network is split into two different branches, and the evidence from the two branches is fused to generate high-quality pseudo-labels. The framework then applies a partial evidence loss and a dual-branch consistency loss to jointly train the model, adapting it to scribble-supervised learning. The proposed method was tested on two cardiac datasets: ACDC and MSCMRseg. The results show that our method significantly enhances the reliability and generalization ability of the model without sacrificing accuracy, outperforming state-of-the-art baselines. The code is available at https://github.com/Gardnery/DuEDL.
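To make the dual-branch evidence fusion concrete, the sketch below shows one common EDL formulation: per-branch logits are mapped to non-negative evidence (softplus here, an assumed choice), the evidence from the two branches is averaged (a simple illustrative fusion rule, not necessarily the paper's exact scheme), Dirichlet parameters yield expected class probabilities and a vacuity-style uncertainty, and pseudo-labels come from the fused probabilities. All function and variable names are hypothetical.

```python
import numpy as np

def fuse_dual_branch_evidence(logits_a, logits_b):
    """Fuse EDL evidence from two decoder branches into pseudo-labels.

    logits_a, logits_b: arrays of shape (H, W, K) from the two branches.
    Returns per-pixel pseudo-labels and a vacuity-style uncertainty map.
    """
    # Map logits to non-negative evidence via softplus (assumed activation).
    ev_a = np.log1p(np.exp(logits_a))
    ev_b = np.log1p(np.exp(logits_b))

    # Illustrative fusion: average the evidence from both branches.
    evidence = 0.5 * (ev_a + ev_b)

    # Dirichlet concentration parameters: alpha = evidence + 1.
    alpha = evidence + 1.0
    strength = alpha.sum(axis=-1, keepdims=True)

    # Expected class probabilities under the fused Dirichlet.
    prob = alpha / strength

    # Vacuity uncertainty u = K / S, in (0, 1]; high when evidence is scarce.
    num_classes = alpha.shape[-1]
    uncertainty = num_classes / strength.squeeze(-1)

    # Pseudo-labels from the fused expected probabilities.
    pseudo_labels = prob.argmax(axis=-1)
    return pseudo_labels, uncertainty

# Tiny usage example on a 2x2 "image" with 3 classes.
rng = np.random.default_rng(0)
labels, unc = fuse_dual_branch_evidence(
    rng.normal(size=(2, 2, 3)), rng.normal(size=(2, 2, 3))
)
```

In practice such pseudo-labels would supervise unannotated pixels only where uncertainty is low, which is how EDL-style fusion can filter unreliable regions.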