Pre-trained vision-language models such as CLIP have shown powerful zero-shot inference ability via image-text matching and have proven to be strong few-shot learners in various downstream tasks. However, in real-world scenarios, adapting CLIP to downstream tasks may encounter the following challenges: 1) data may exhibit long-tailed distributions and might not have abundant samples for all classes; 2) there may be emerging tasks with new classes that contain no samples at all. To overcome these challenges, we propose a novel framework for efficient and long-tailed generalization, termed Candle. During training, we propose a compensating logit-adjusted loss to encourage large margins between prototypes and to alleviate imbalance both within the base classes and between the base and new classes. For efficient adaptation, we treat the CLIP model as a black box and leverage the extracted features to obtain visual and textual prototypes for prediction. To make full use of multi-modal information, we also propose cross-modal attention to enrich the features of both modalities. For effective generalization, we introduce virtual prototypes for new classes to compensate for their lack of training images. Candle achieves state-of-the-art performance in extensive experiments on 11 diverse datasets while substantially reducing the training time, demonstrating the superiority of our approach. The source code is available at https://github.com/shijxcs/Candle.
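For intuition, the standard logit-adjusted loss that the compensating variant builds on shifts each class logit by the log of its empirical prior, so that rare classes receive larger effective margins. The sketch below is a minimal NumPy illustration of this standard form (Menon et al.-style), not Candle's exact compensating loss; the function name, `tau` temperature, and toy class counts are illustrative assumptions.

```python
import numpy as np

def logit_adjusted_loss(logits, labels, class_counts, tau=1.0):
    """Logit-adjusted cross-entropy for long-tailed data: add
    tau * log(prior) to each class logit before the softmax, which
    enlarges margins for tail classes. A sketch of the standard
    loss, not Candle's compensating variant."""
    priors = class_counts / class_counts.sum()   # empirical class priors
    adjusted = logits + tau * np.log(priors)     # shift logits by log-priors
    # numerically stable log-softmax over the adjusted logits
    z = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy example: 3 classes with heavily imbalanced counts (hypothetical numbers)
logits = np.array([[2.0, 1.0, 0.5],
                   [0.2, 1.5, 0.1]])
labels = np.array([0, 1])
counts = np.array([100.0, 10.0, 1.0])
loss = logit_adjusted_loss(logits, labels, counts)
```

Because the head class gets the largest additive boost, the model must produce a larger raw logit gap to classify a tail-class sample correctly, which is exactly the margin effect the loss is designed to induce.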