We aim to finetune a vision-language model without hurting its out-of-distribution (OOD) generalization. We address two types of OOD generalization: i) domain shift, such as from natural to sketch images, and ii) zero-shot capability, i.e., recognizing categories not contained in the finetuning data. Arguably, the diminished OOD generalization after finetuning stems from the excessively simplified finetuning target, which provides only class information, such as ``a photo of a [CLASS]''. This is distinct from the process by which CLIP was pretrained, where abundant text supervision carries rich semantic information. We therefore propose to compensate the finetuning process with auxiliary supervision rich in semantics, which acts as anchors to preserve OOD generalization. Specifically, our method elaborates two types of anchors: i) the text-compensated anchor, which uses images from the finetuning set but enriches their text supervision with a pretrained captioner; and ii) the image-text-pair anchor, retrieved from a dataset similar to CLIP's pretraining data according to the downstream task and associated with the original CLIP text carrying rich semantics. These anchors serve as auxiliary semantic information that maintains the original feature space of CLIP, thereby preserving its OOD generalization capability. Comprehensive experiments demonstrate that our method achieves in-distribution performance on par with conventional finetuning while setting new state-of-the-art results on domain shift and zero-shot learning benchmarks.
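To make the anchor idea concrete, below is a minimal training-step sketch, not the paper's actual implementation. It assumes PyTorch and a CLIP-style model exposing `encode_image`/`encode_text`; the batch keys, the loss weights `lambda_text`/`lambda_pair`, and the helper names are all hypothetical. The sketch combines the standard class-prompt finetuning loss with two auxiliary contrastive terms, one per anchor type.

```python
# Hypothetical sketch of anchor-regularized finetuning (assumes PyTorch and a
# CLIP-style model with encode_image / encode_text; names are illustrative).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over an image/text batch, as in CLIP pretraining."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def finetune_step(model, batch, lambda_text=1.0, lambda_pair=1.0):
    # i) Standard finetuning term: class-name prompts for the labeled images.
    img_feats = model.encode_image(batch["images"])
    cls_feats = model.encode_text(batch["class_prompts"])   # "a photo of a [CLASS]"
    loss_ft = clip_contrastive_loss(img_feats, cls_feats)

    # ii) Text-compensated anchor: same images, captioner-enriched text.
    cap_feats = model.encode_text(batch["captions"])         # from a pretrained captioner
    loss_text_anchor = clip_contrastive_loss(img_feats, cap_feats)

    # iii) Image-text-pair anchor: pairs retrieved from pretraining-like data
    #      according to the downstream task, with their original rich text.
    ret_img_feats = model.encode_image(batch["retrieved_images"])
    ret_txt_feats = model.encode_text(batch["retrieved_texts"])
    loss_pair_anchor = clip_contrastive_loss(ret_img_feats, ret_txt_feats)

    # Auxiliary anchor terms pull the model toward CLIP's original feature space
    # while the finetuning term adapts it to the downstream classes.
    return loss_ft + lambda_text * loss_text_anchor + lambda_pair * loss_pair_anchor
```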