Recent advances in vision-language foundation models such as CLIP have led to impressive zero-shot classification performance. However, the large parameter counts of models like CLIP make fine-tuning resource-intensive. In response, TIP-Adapter and SuS-X introduced training-free methods that improve downstream task performance. While these approaches incorporate support sets to keep the data distribution of the knowledge cache consistent with that of the test set, they often generalize poorly to test data whose distribution differs substantially. In this work, we present CapS-Adapter, a method that employs a caption-based support set and exploits both image and caption features to surpass existing state-of-the-art techniques in training-free scenarios. CapS-Adapter constructs support sets that closely mirror the target distribution by using instance-level distribution features extracted from multimodal large models. By leveraging CLIP's unimodal and cross-modal strengths, CapS-Adapter improves predictive accuracy with multimodal support sets. Our method achieves superior zero-shot classification results across 19 benchmark datasets, improving accuracy by 2.19\% over the previous leading method. Extensive validation on these benchmarks demonstrates strong performance and robust generalization. Our code is publicly available at https://github.com/WLuLi/CapS-Adapter.
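For context, the sketch below illustrates the general training-free cache mechanism popularized by TIP-Adapter, on which support-set methods such as SuS-X and CapS-Adapter build: CLIP's zero-shot logits are blended with affinities between test features and a cached support set. This is a minimal illustrative example, not the released implementation; the function name, tensor shapes, and hyperparameter values (`alpha`, `beta`) are assumptions for exposition only.

```python
import torch


def training_free_adapter_logits(
    test_image_feats,       # (N, D) CLIP image features of test images, L2-normalized
    support_feats,          # (M, D) support-set features (image or caption embeddings), L2-normalized
    support_labels_onehot,  # (M, C) one-hot class labels of the support set
    text_classifier,        # (C, D) CLIP text embeddings of class prompts, L2-normalized
    alpha=1.0,              # assumed blending weight between zero-shot and cache logits
    beta=5.5,               # assumed sharpness of the cache affinity
):
    """TIP-Adapter-style training-free prediction: combine CLIP zero-shot
    logits with affinities to a cached support set (illustrative sketch)."""
    # Zero-shot logits from CLIP's cross-modal image-text similarity.
    zero_shot_logits = 100.0 * test_image_feats @ text_classifier.t()

    # Affinity between test features and the support set; with caption-based
    # support features this similarity becomes cross-modal as well.
    affinity = test_image_feats @ support_feats.t()
    cache_logits = torch.exp(-beta * (1.0 - affinity)) @ support_labels_onehot

    return zero_shot_logits + alpha * cache_logits
```

In this formulation, the choice of support features (curated images, generated images, or, as in our caption-based support set, text derived from instance-level captions) determines how well the cache matches the target distribution.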