Self-training methods have proven effective at exploiting abundant unlabeled data in semi-supervised learning, particularly when labeled data is scarce. While many of these approaches rely on a cross-entropy (CE) loss function, recent advances have shown that the supervised contrastive loss function (SupCon) can be more effective. Additionally, unsupervised contrastive learning approaches have been shown to capture high-quality data representations in the unsupervised setting. To benefit from these advantages in a semi-supervised setting, we propose a general framework to enhance self-training methods, which replaces all instances of CE losses with a unique contrastive loss. By using class prototypes, a set of class-wise trainable parameters, we recover the probability distributions of the CE setting and show a theoretical equivalence with it. When applied to popular self-training methods, our framework yields significant performance improvements across three different datasets with a limited number of labeled examples. We further demonstrate improvements in convergence speed, transfer ability, and hyperparameter stability. The code is available at \url{https://github.com/AurelienGauffre/semisupcon/}.
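As a minimal sketch of the class-prototype mechanism described above, the snippet below shows one plausible way to recover class probabilities from embeddings: treat each class as a trainable prototype vector, use (scaled) similarity between an embedding and each prototype as logits, and apply a softmax. The function names, the use of cosine similarity, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prototype_probs(z, prototypes, temperature=0.1):
    """Recover class probabilities from embeddings via class prototypes.

    z:          (batch, dim) L2-normalized embeddings
    prototypes: (num_classes, dim) class-wise trainable parameters,
                L2-normalized so the dot product is a cosine similarity
    """
    logits = z @ prototypes.T / temperature  # similarity to each prototype
    return softmax(logits)

# Toy example: 2 classes in a 3-D embedding space.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(2, 3))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
z = prototypes[[0]]  # an embedding perfectly aligned with prototype 0
p = prototype_probs(z, prototypes)
```

Once probabilities are recovered this way, a standard cross-entropy loss can be computed on them, which is the sense in which prototype-based contrastive training can mirror the CE setting.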