Medical image segmentation is crucial for clinical applications, but it is frequently disrupted by noisy annotations and ambiguous anatomical boundaries, which lead to instability in model training. Existing methods typically rely on global noise assumptions or confidence-based sample selection, which inadequately mitigate the performance degradation caused by annotation noise, especially in challenging boundary regions. To address this issue, we propose MetaDCSeg, a robust framework that dynamically learns optimal pixel-wise weights to suppress the influence of noisy ground-truth labels while preserving reliable annotations. By explicitly modeling boundary uncertainty through a Dynamic Center Distance (DCD) mechanism, our approach utilizes weighted feature distances for foreground, background, and boundary centers, directing the model's attention toward hard-to-segment pixels near ambiguous boundaries. This strategy enables more precise handling of structural boundaries, which are often overlooked by existing methods, and significantly enhances segmentation performance. Extensive experiments across four benchmark datasets with varying noise levels demonstrate that MetaDCSeg consistently outperforms existing state-of-the-art methods.
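The paper's exact formulation of the Dynamic Center Distance (DCD) mechanism is not given in the abstract; the following is a minimal illustrative sketch of the general idea, assuming Euclidean feature distances, mean class centers, a 4-neighborhood boundary band, and an exponential weighting — all hypothetical simplifications, not the authors' actual method.

```python
import numpy as np

def dcd_weights(features, labels):
    """Hypothetical DCD-style pixel weighting (illustrative only).

    features: (H, W, C) per-pixel feature map
    labels:   (H, W) binary mask (possibly noisy)
    Returns per-pixel weights in (0, 1]; pixels whose features lie close
    to the boundary center relative to their own class center are treated
    as hard/ambiguous and receive smaller weight.
    """
    H, W, C = features.shape
    fg = labels.astype(bool)
    bg = ~fg
    # Boundary band: pixels whose 4-neighborhood contains both classes.
    pad = np.pad(labels, 1, mode='edge')
    nb = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                   pad[1:-1, :-2], pad[1:-1, 2:]])
    boundary = nb.min(0) != nb.max(0)

    feats = features.reshape(-1, C)
    # Mean feature centers for foreground, background, and boundary pixels.
    c_fg = feats[fg.ravel()].mean(0) if fg.any() else np.zeros(C)
    c_bg = feats[bg.ravel()].mean(0) if bg.any() else np.zeros(C)
    c_bd = feats[boundary.ravel()].mean(0) if boundary.any() else np.zeros(C)

    # Distance to each pixel's own class center vs. the boundary center.
    own = np.where(fg.ravel()[:, None], c_fg, c_bg)
    d_own = np.linalg.norm(feats - own, axis=1)
    d_bd = np.linalg.norm(feats - c_bd, axis=1)

    # Down-weight pixels far from their class center but near the boundary.
    w = np.exp(-d_own / (d_bd + 1e-8))
    return w.reshape(H, W)
```

Such weights could multiply a per-pixel segmentation loss so that ambiguous boundary pixels with possibly noisy labels contribute less to training; the actual MetaDCSeg framework learns its weights meta-adaptively rather than by this fixed rule.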