Recent adaptations can boost the low-shot capability of Contrastive Vision-Language Pre-training (CLIP) by effectively facilitating knowledge transfer. However, these adaptation methods usually operate on the global view of an input image, and thus yield a biased perception of the image's local details. To address this problem, we propose Visual Content Refinement (VCR), applied before the adaptation computation at test time. Specifically, we first decompose the test image into different scales to shift the feature extractor's attention to the details of the image. Then, at each scale, we select the image view with the maximum prediction margin to filter out noisy image views, where the prediction margins are computed with the pre-trained CLIP model. Finally, we merge the content of the selected image views across scales to construct a new, robust representation. The merged content can thus be used directly to help the adapter focus on both global and local parts, without any extra training parameters. We apply our method to 3 popular low-shot benchmark tasks spanning 13 datasets and achieve significant improvements over state-of-the-art methods. For example, compared to the baseline (Tip-Adapter) on the few-shot classification task, our method achieves about a 2\% average improvement in both the training-free and training-required settings.
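To make the decompose-select-merge pipeline concrete, below is a minimal sketch of the VCR idea in Python with the open-source CLIP package. The abstract does not specify the exact crop scales, the number of views per scale, the margin definition, or the merge rule, so the choices here (square crops at three scales, eight views per scale, margin = top-1 minus top-2 softmax probability, a plain average across scales) and all names such as refine_visual_content are illustrative assumptions, not the paper's implementation.

# A minimal sketch of Visual Content Refinement (VCR), assuming:
# crops at three scales, eight views per scale, margin = top-1 minus
# top-2 softmax probability, and a simple average as the cross-scale
# merge. Requires: pip install torch torchvision
# and pip install git+https://github.com/openai/CLIP.git
import torch
import clip
from torchvision import transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def refine_visual_content(image: Image.Image, text_feats: torch.Tensor,
                          scales=(1.0, 0.75, 0.5), views_per_scale=8):
    """Return a merged image feature built from the best view at each scale."""
    selected = []
    for s in scales:
        # Decompose: sample crops covering roughly s^2 of the image area.
        crop = transforms.RandomResizedCrop(224, scale=(s * s, s * s))
        views = torch.stack([preprocess(crop(image))
                             for _ in range(views_per_scale)])
        with torch.no_grad():
            feats = model.encode_image(views.to(device)).float()
            feats = feats / feats.norm(dim=-1, keepdim=True)
            probs = (100.0 * feats @ text_feats.T).softmax(dim=-1)
        # Select: keep the view with the largest top-1 vs. top-2 margin,
        # i.e. the view the pre-trained CLIP model is most decisive about.
        top2 = probs.topk(2, dim=-1).values
        margins = top2[:, 0] - top2[:, 1]
        selected.append(feats[margins.argmax()])
    # Merge: average the selected views across scales (one plausible choice),
    # giving a single representation that mixes global and local content.
    merged = torch.stack(selected).mean(dim=0)
    return merged / merged.norm()

# Example usage with zero-shot class prompts:
classnames = ["cat", "dog"]
with torch.no_grad():
    text = clip.tokenize([f"a photo of a {c}" for c in classnames]).to(device)
    text_feats = model.encode_text(text).float()
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
img_feat = refine_visual_content(Image.open("test.jpg").convert("RGB"), text_feats)

The merged feature would then replace the single global image feature that an adapter such as Tip-Adapter normally consumes; since selection and merging only reuse the frozen CLIP model, no extra trainable parameters are introduced.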