Foundation models like CLIP allow zero-shot transfer to various tasks without additional training data. Yet, the zero-shot performance is less competitive than that of fully supervised models. Thus, to enhance performance, fine-tuning and ensembling are commonly adopted to better fit downstream tasks. However, we argue that such prior work has overlooked the inherent biases in foundation models. Due to the highly imbalanced Web-scale training set, these foundation models are inevitably skewed toward frequent semantics, so the subsequent fine-tuning or ensembling remains biased. In this study, we systematically examine the biases in foundation models and demonstrate the efficacy of our proposed Generalized Logit Adjustment (GLA) method. Note that bias estimation in foundation models is challenging, as most pre-training data cannot be explicitly accessed as in traditional long-tailed classification tasks. To this end, GLA employs an optimization-based bias estimation approach for debiasing foundation models. As our work resolves a fundamental flaw in pre-training, the proposed GLA demonstrates significant improvements across a diverse range of tasks: it achieves a 1.5 pp accuracy gain on ImageNet, large average improvements (1.4-4.6 pp) on 11 few-shot datasets, and 2.4 pp gains on long-tailed classification. Code is available at https://github.com/BeierZhu/GLA.
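To make the idea concrete, below is a minimal sketch of the two steps the abstract describes: estimating the pre-training bias by optimization (since the pre-training label distribution cannot be read off the data directly), then subtracting it from the zero-shot logits and ensembling with a fine-tuned model. The objective (pushing the mean debiased prediction toward uniform), the optimizer settings, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of optimization-based bias estimation and logit adjustment.
# Assumption: the pre-training skew appears as a per-class additive shift
# (a log-prior) in the zero-shot logits, which we fit so that the mean
# debiased prediction over downstream data becomes balanced.
import torch

def estimate_log_prior(zero_shot_logits: torch.Tensor,
                       steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Estimate a per-class log-prior (the bias) from zero-shot logits
    of shape (N, K) computed on downstream data."""
    num_classes = zero_shot_logits.size(1)
    log_pi = torch.zeros(num_classes, requires_grad=True)
    opt = torch.optim.Adam([log_pi], lr=lr)
    uniform = torch.full((num_classes,), 1.0 / num_classes)
    for _ in range(steps):
        opt.zero_grad()
        # Mean prediction after removing the candidate bias.
        mean_pred = torch.softmax(zero_shot_logits - log_pi, dim=1).mean(0)
        # KL(mean_pred || uniform): zero iff the mean prediction is balanced.
        loss = torch.sum(mean_pred * (mean_pred / uniform).log())
        loss.backward()
        opt.step()
    return log_pi.detach()

def generalized_logit_adjustment(zero_shot_logits: torch.Tensor,
                                 fine_tuned_logits: torch.Tensor,
                                 log_pi: torch.Tensor) -> torch.Tensor:
    """Ensemble the debiased zero-shot logits with the fine-tuned logits."""
    return (zero_shot_logits - log_pi) + fine_tuned_logits
```

In practice, the zero-shot logits would come from the frozen foundation model evaluated on the downstream set, and the relative weighting of the two models in the ensemble is a further design choice; see the repository above for the authors' actual implementation.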