Medical image segmentation is crucial for clinical applications, but it is frequently disrupted by noisy annotations and ambiguous anatomical boundaries, which lead to instability in model training. Existing methods typically rely on global noise assumptions or confidence-based sample selection, which inadequately mitigate the performance degradation caused by annotation noise, especially in challenging boundary regions. To address this issue, we propose MetaDCSeg, a robust framework that dynamically learns optimal pixel-wise weights to suppress the influence of noisy labels while preserving reliable annotations. By explicitly modeling boundary uncertainty through a Dynamic Center Distance (DCD) mechanism, our approach utilizes weighted feature distances for foreground, background, and boundary centers, directing the model's attention toward hard-to-segment pixels near ambiguous boundaries. This strategy enables more precise handling of structural boundaries, which are often overlooked by existing methods, and significantly enhances segmentation performance. Extensive experiments across four benchmark datasets with varying noise levels demonstrate that MetaDCSeg outperforms existing state-of-the-art methods.
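The core idea above — weighting each pixel by its feature distances to foreground, background, and boundary centers — can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual formulation: the function name `dcd_pixel_weights`, the mixing parameter `alpha`, and the sigmoid-based weight mapping are all assumptions introduced here; the real MetaDCSeg learns these weights via meta-learning.

```python
import numpy as np

def dcd_pixel_weights(features, labels, boundary_mask, alpha=0.5):
    """Hypothetical Dynamic-Center-Distance-style pixel weighting (sketch).

    features:      (H, W, C) per-pixel embeddings
    labels:        (H, W) binary {0, 1} (possibly noisy) annotations
    boundary_mask: (H, W) bool, pixels near the annotated boundary
    Returns per-pixel loss weights in (0, 1); pixels far from their own
    class center or close to the boundary center get lower weight.
    """
    # class centers: mean feature of each region (a simple stand-in for
    # whatever center estimate the actual method uses)
    fg = features[labels == 1].mean(axis=0)    # foreground center
    bg = features[labels == 0].mean(axis=0)    # background center
    bd = features[boundary_mask].mean(axis=0)  # boundary center

    d_fg = np.linalg.norm(features - fg, axis=-1)
    d_bg = np.linalg.norm(features - bg, axis=-1)
    d_bd = np.linalg.norm(features - bd, axis=-1)

    # distance to the center of the pixel's own (noisy) annotated class
    d_own = np.where(labels == 1, d_fg, d_bg)

    # large score => far from own class / near boundary => likely noisy or hard
    score = d_own - alpha * d_bd

    # squash standardized scores through a sigmoid to get weights in (0, 1)
    w = 1.0 / (1.0 + np.exp((score - score.mean()) / (score.std() + 1e-8)))
    return w
```

In a training loop, such weights would multiply the per-pixel segmentation loss so that pixels with suspect annotations contribute less to the gradient, while reliable interior pixels are preserved.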