Label noise is a critical problem in medical image segmentation, often arising from the inherent difficulty of manual annotation. Models trained on noisy data are prone to overfitting, which degrades their generalization performance. Although several methods have been proposed to mitigate noisy labels in segmentation, the area remains largely under-explored. The abstention mechanism has proven effective in classification tasks, where it enhances the noise-robustness of the Cross-Entropy loss, yet its potential in segmentation remains unverified. In this paper, we address this gap by introducing a universal, modular abstention framework that can enhance the noise-robustness of a diverse range of loss functions. Our framework improves upon prior work with two key components: an informed regularization term that guides abstention behaviour, and a more flexible power-law-based algorithm that auto-tunes the abstention penalty. We demonstrate the framework's versatility by systematically integrating it with three distinct loss functions, yielding three novel noise-robust variants: GAC, SAC, and ADS. Experiments on the CaDIS and DSAD medical datasets show that our methods consistently and significantly outperform their non-abstaining baselines, especially under high noise levels. This work establishes that enabling models to selectively ignore corrupted samples is a powerful and generalizable strategy for building more reliable segmentation models. Our code is publicly available at https://github.com/wemous/abstention-for-segmentation.
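To make the core idea concrete, below is a minimal PyTorch-style sketch of an abstention-augmented cross-entropy for dense prediction, modeled on the gambler's-loss formulation from the classification literature. The function names, the exact per-pixel loss, and the power-law payoff schedule are illustrative assumptions, not the paper's GAC/SAC/ADS losses or its actual auto-tuning algorithm.

```python
import torch
import torch.nn.functional as F

def abstaining_cross_entropy(logits, target, payoff):
    """Gambler's-loss-style abstaining CE for dense prediction (illustrative).

    logits: (B, K+1, H, W) -- K class channels plus one abstention channel.
    target: (B, H, W) integer labels in [0, K).
    payoff: scalar o > 1; a smaller o makes abstaining cheaper.
    """
    probs = F.softmax(logits, dim=1)
    class_probs = probs[:, :-1]                          # (B, K, H, W)
    abstain = probs[:, -1:]                              # (B, 1, H, W)
    p_true = class_probs.gather(1, target.unsqueeze(1))  # prob of the given label
    # A pixel may hedge against a suspect label by routing probability
    # mass to the abstention channel, discounted by the payoff.
    loss = -torch.log(p_true + abstain / payoff + 1e-8)
    return loss.mean()

def power_law_payoff(step, total_steps, o_min=1.2, o_max=8.0, gamma=2.0):
    # Hypothetical power-law schedule for the abstention penalty: the
    # direction and shape of the anneal are assumptions for illustration.
    frac = step / max(total_steps, 1)
    return o_max - (o_max - o_min) * frac ** gamma
```

In this sketch, pixels whose labels the model cannot fit can pay a penalty, controlled by the payoff, to be effectively ignored by the loss; scheduling the payoff over training shapes how freely the model abstains as it gains confidence.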