Large vision-language models, such as CLIP, demonstrate stronger robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, aiming to capture the spurious features inherent in ImageNet. Benchmarking CLIP models against ImageNet-oriented spurious features may not sufficiently reflect the extent to which CLIP models are robust to spurious correlations within their own training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal, designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to their backgrounds, and then identify, for each class, a pair of groups across which a CLIP model shows a large performance drop. Our evaluations show that the spurious features captured by CounterAnimal are generically learned by CLIP models with different backbones and pre-training data, yet have limited influence on ImageNet models. We provide theoretical insights that the CLIP objective does not offer additional robustness. Furthermore, we re-evaluate strategies such as scaling up parameters and using high-quality pre-training data, and find that they still help mitigate reliance on spurious features, providing a promising path for future developments.
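The group-pair construction described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: it assumes per-background zero-shot accuracies for a class have already been computed, and all names and numbers below are hypothetical.

```python
# Hypothetical sketch: photos of each animal class are grouped by background,
# and the pair of groups with the largest zero-shot accuracy gap is kept as
# the (easy, hard) split. Accuracies and background names are illustrative.

def pick_group_pair(group_acc):
    """group_acc: mapping background -> CLIP zero-shot accuracy on that group.
    Returns the (easy, hard) backgrounds and the accuracy drop between them."""
    easy = max(group_acc, key=group_acc.get)   # background with highest accuracy
    hard = min(group_acc, key=group_acc.get)   # background with lowest accuracy
    return easy, hard, group_acc[easy] - group_acc[hard]

# Illustrative per-background accuracies for one class:
acc = {"snow": 0.95, "grass": 0.60, "water": 0.72}
easy, hard, drop = pick_group_pair(acc)
print(easy, hard, round(drop, 2))  # snow grass 0.35
```

In this sketch, the "easy" group carries the background that spuriously correlates with the class in pre-training data, while the "hard" group serves as the counterexample set.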