Deep supervised learning has achieved remarkable success across a wide range of tasks, yet it remains susceptible to overfitting when confronted with noisy labels. To address this issue, noise-robust loss functions offer an effective solution for enhancing learning in the presence of label noise. In this work, we systematically investigate the limitations of the recently proposed Active Passive Loss (APL), which employs Mean Absolute Error (MAE) as its passive loss function. Despite the robustness brought by MAE, one of its key drawbacks is that it pays equal attention to clean and noisy samples; this behavior slows down convergence and can make training difficult, particularly on large-scale datasets. To overcome these challenges, we introduce a novel class of loss functions, termed Normalized Negative Loss Functions (NNLFs), which serve as passive loss functions within the APL framework. NNLFs effectively address the limitations of MAE by concentrating more on memorized clean samples. By replacing MAE in APL with our proposed NNLFs, we enhance APL and present a new framework called Active Negative Loss (ANL). Moreover, in non-symmetric noise scenarios, we propose an entropy-based regularization technique to mitigate the vulnerability to label imbalance. Extensive experiments demonstrate that the new loss functions adopted by our ANL framework achieve performance better than or comparable to state-of-the-art methods across various label noise types and in image segmentation tasks. The source code is available at: https://github.com/Virusdoll/Active-Negative-Loss.
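To make the APL structure being critiqued concrete, the sketch below implements an illustrative APL baseline: a normalized cross entropy term as the active loss plus MAE as the passive loss, combined with weighting coefficients `alpha` and `beta`. This follows the general active-plus-passive form described in the abstract; the exact formulas, hyperparameters, and the NNLF replacement proposed by ANL are not specified here, so this is a minimal sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def normalized_cross_entropy(pred, label_onehot, eps=1e-7):
    """Normalized CE (the 'active' term): per-sample CE divided by the
    sum of CE over all possible labels, bounding the loss."""
    log_p = np.log(np.clip(pred, eps, 1.0))           # (N, K)
    numerator = -np.sum(label_onehot * log_p, axis=1)  # CE w.r.t. given label
    denominator = -np.sum(log_p, axis=1)               # CE summed over all labels
    return numerator / denominator

def mae(pred, label_onehot):
    """MAE (the 'passive' term in the original APL): weights clean and
    noisy samples equally, which the abstract identifies as the cause of
    slow convergence that NNLFs are designed to fix."""
    return np.sum(np.abs(label_onehot - pred), axis=1)

def apl_loss(pred, label_onehot, alpha=1.0, beta=1.0):
    """Active Passive Loss: weighted sum of an active and a passive loss.
    ANL would swap the MAE term here for a Normalized Negative Loss Function."""
    return (alpha * normalized_cross_entropy(pred, label_onehot)
            + beta * mae(pred, label_onehot))

if __name__ == "__main__":
    label = np.array([[1.0, 0.0, 0.0]])
    confident = np.array([[0.90, 0.05, 0.05]])
    uniform = np.array([[1/3, 1/3, 1/3]])
    # A confident correct prediction should incur a lower combined loss
    # than a maximally uncertain one.
    print(apl_loss(confident, label) < apl_loss(uniform, label))
```

Both terms are bounded per sample (NCE in [0, 1], MAE in [0, 2]), which is the property that makes the combination noise-tolerant in theory; the abstract's point is that MAE's flat gradient over all samples is what NNLFs improve upon.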