Fine-tuned vision-language models (VLMs) often capture spurious correlations between image features and textual attributes, resulting in degraded zero-shot performance at test time. Existing approaches for addressing spurious correlations (i) primarily operate at the global image level rather than intervening directly on fine-grained image features and (ii) are predominantly designed for unimodal settings. In this work, we present RaVL, which takes a fine-grained perspective on VLM robustness by discovering and mitigating spurious correlations using local image features rather than operating at the global image level. Given a fine-tuned VLM, RaVL first discovers spurious correlations by leveraging a region-level clustering approach to identify precise image features contributing to zero-shot classification errors. Then, RaVL mitigates the identified spurious correlations with a novel region-aware loss function that enables the VLM to focus on relevant regions and ignore spurious relationships during fine-tuning. We evaluate RaVL on 654 VLMs spanning various model architectures, data domains, and learned spurious correlations. Our results show that RaVL accurately discovers (191% improvement over the closest baseline) and mitigates (8.2% improvement in worst-group image classification accuracy) spurious correlations. Qualitative evaluations on general-domain and medical-domain VLMs confirm our findings.
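The two stages described above (region-level clustering to discover error-correlated image features, then a region-aware score that down-weights them) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the clustering routine, the cluster influence score, and the pooled region-aware logit are simplified stand-ins, and all function and variable names (`kmeans`, `cluster_error_scores`, `region_aware_logits`, `spurious_region`) are hypothetical. It assumes region embeddings and class-text embeddings have already been extracted from the VLM.

```python
# Illustrative sketch of RaVL's discover-then-mitigate pipeline, assuming
# precomputed region embeddings. Names and scoring details are assumptions,
# not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    # Normalize rows to unit length for cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def kmeans(feats, k, iters=20):
    # Minimal k-means over region features: a stand-in for the
    # region-level clustering step that groups similar image features.
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(0)
    return assign

def cluster_error_scores(assign, img_of_region, img_correct, k):
    # Score each cluster by the zero-shot accuracy gap between images
    # that lack it and images that contain it; a large positive score
    # flags a cluster that co-occurs with classification errors.
    # (An illustrative influence score, not the paper's exact metric.)
    scores = np.zeros(k)
    for j in range(k):
        has = np.zeros(len(img_correct), bool)
        has[np.unique(img_of_region[assign == j])] = True
        if has.any() and (~has).any():
            scores[j] = img_correct[~has].mean() - img_correct[has].mean()
    return scores

def region_aware_logits(region_feats, text_feats, spurious_region, tau=0.07):
    # Mitigation sketch: pool region-to-text similarities into an
    # image-level logit while zeroing out regions flagged as spurious,
    # so predictions are driven by relevant regions only.
    sims = l2norm(region_feats) @ l2norm(text_feats).T / tau  # (R, C)
    w = np.where(spurious_region, 0.0, 1.0)
    w = w / max(w.sum(), 1e-8)
    return (w[:, None] * sims).sum(0)  # (C,)
```

In this sketch, discovery runs `kmeans` over region features from the validation set and ranks clusters with `cluster_error_scores`; mitigation then uses the flagged clusters to mask regions inside `region_aware_logits` during fine-tuning.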