Fundus image classification is crucial in computer-aided diagnosis, but label noise significantly impairs the performance of deep neural networks. To address this challenge, we propose a robust framework, Self-Supervised Pre-training with Robust Adaptive Credal Loss (SSP-RACL), for handling label noise in fundus image datasets. First, we use Masked Autoencoders (MAE) for pre-training, extracting features without being affected by label noise. RACL then employs a superset learning framework: it sets confidence thresholds and an adaptive label relaxation parameter to construct possibility distributions that provide more reliable ground-truth estimates, thereby effectively suppressing the memorization effect. Additionally, we introduce clinical-knowledge-based asymmetric noise generation to simulate real-world noisy fundus image datasets. Experimental results demonstrate that our method outperforms existing approaches in handling label noise.
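To make the relaxation step concrete, the following is a minimal sketch, under our own assumptions, of how a confidence threshold and an adaptive relaxation parameter could turn a noisy one-hot label into a possibility distribution. The threshold `tau`, the cap `alpha_max`, and the linear adaptation rule are illustrative placeholders, not the actual parameters or schedule used by RACL.

```python
import numpy as np

def relaxed_target(probs, noisy_label, tau=0.7, alpha_max=0.5):
    """Illustrative adaptive label relaxation (not the paper's exact rule).

    If the model's predicted probability for the observed label exceeds
    the confidence threshold tau, the target stays (near) one-hot;
    otherwise the target is relaxed into a possibility distribution that
    also admits the other classes, with the relaxation strength growing
    as confidence in the observed label drops.
    """
    probs = np.asarray(probs, dtype=float)
    conf = probs[noisy_label]
    # Adaptive relaxation: low confidence in the given label -> larger alpha.
    alpha = alpha_max * (1.0 - min(conf / tau, 1.0))
    pi = np.full(len(probs), alpha)  # possibility assigned to other classes
    pi[noisy_label] = 1.0            # the observed label stays fully possible
    return pi
```

A possibility distribution built this way describes a credal set of plausible ground truths; a credal loss can then fit the prediction against the most compatible member of that set instead of trusting the noisy label outright.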
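The asymmetric noise generation can likewise be sketched as flipping each label, with some probability, to a class it is clinically likely to be confused with. The transition map below is a hypothetical example; the real confusion pairs would come from clinical knowledge about fundus diseases.

```python
import numpy as np

def apply_asymmetric_noise(labels, transition, noise_rate, seed=0):
    """Illustrative class-conditional (asymmetric) label-noise injection.

    Each label is replaced by its designated confusable class (given by
    `transition`) with probability `noise_rate`; other labels are kept.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < noise_rate
    labels[flip] = np.array([transition[y] for y in labels[flip]], dtype=labels.dtype)
    return labels

# Hypothetical confusion map: class 0 is mistaken for 1, 1 for 0, 2 is stable.
TRANSITION = {0: 1, 1: 0, 2: 2}
```

Unlike symmetric noise, which scatters errors uniformly over classes, this scheme concentrates errors on plausible misdiagnoses, which is what makes the simulated datasets closer to real-world annotation noise.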