Semi-supervised learning (SSL) commonly exhibits confirmation bias, where models disproportionately favor certain classes, leading to errors in predicted pseudo labels that accumulate under a self-training paradigm. Unlike supervised settings, which benefit from a rich, static data distribution, SSL inherently lacks mechanisms to correct this self-reinforced bias, necessitating debiased interventions at each training step. Although the generation of debiased pseudo labels has been extensively studied, their effective utilization remains underexplored. Our analysis indicates that data from biased classes should have a reduced influence on parameter updates, while more attention should be given to underrepresented classes. To address these challenges, we introduce TaMatch, a unified framework for debiased training in SSL. TaMatch employs a scaling ratio derived from both a prior target distribution and the model's learning status to estimate and correct bias at each training step. This ratio adjusts the raw predictions on unlabeled data to produce debiased pseudo labels. In the utilization phase, these labels are weighted differently according to their predicted class, enhancing training equity and minimizing class bias. Additionally, TaMatch dynamically adjusts the target distribution in response to the model's learning progress, facilitating robust handling of practical scenarios where the prior distribution is unknown. Empirical evaluations show that TaMatch significantly outperforms existing state-of-the-art methods across a range of challenging image classification tasks, highlighting the critical importance of both the debiased generation and utilization of pseudo labels in SSL.
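The core mechanism described above, rescaling raw predictions by a ratio of a target distribution to the model's current predicted distribution, then weighting pseudo labels by their predicted class, can be illustrated with a minimal NumPy sketch. All function and variable names here are hypothetical illustrations, not the paper's actual implementation, and the specific form of the weighting is an assumption for demonstration:

```python
import numpy as np

def debias_pseudo_labels(probs, target_dist, model_dist, eps=1e-8):
    """Hypothetical sketch of scaling-ratio debiasing (names assumed).

    probs:       (N, C) raw softmax predictions on unlabeled data
    target_dist: (C,)   prior target class distribution
    model_dist:  (C,)   model's current average predicted distribution
    """
    # Scaling ratio: >1 for underrepresented classes, <1 for over-predicted ones
    ratio = target_dist / (model_dist + eps)
    # Adjust raw predictions class-wise and renormalize to get debiased probabilities
    adjusted = probs * ratio
    adjusted /= adjusted.sum(axis=1, keepdims=True)
    pseudo = adjusted.argmax(axis=1)
    # Utilization phase (assumed form): weight each pseudo label by its class's
    # ratio, so biased classes influence parameter updates less
    weights = ratio[pseudo]
    return pseudo, weights
```

For example, if the model over-predicts class 0 (`model_dist = [0.8, 0.2]`) while the target is uniform, the ratio down-scales class-0 scores and up-weights samples pseudo-labeled as class 1, matching the intuition in the abstract that underrepresented classes deserve more attention.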