Prompt learning is a promising method for adapting pre-trained vision-language models (VLMs) to various downstream tasks by learning a set of text embeddings. One challenge inherent to these methods is poor generalization performance, since the learned text embeddings are invalid for unseen tasks. A straightforward way to bridge this gap is to freeze the text embeddings in prompts, but this leaves too little capacity to adapt VLMs to downstream tasks. To address this dilemma, we propose a paradigm called EnPrompt with a novel External Layer (EnLa). Specifically, we propose a textual external layer and learnable visual embeddings for adapting VLMs to downstream tasks. The learnable external layer is built upon the valid embeddings of pre-trained CLIP. This design balances the learning capabilities of the two branches. To align the textual and visual features, we propose a novel two-pronged approach: i) we introduce optimal transport as the discrepancy metric to align the vision and text modalities, and ii) we introduce a novel strengthening feature to enhance the interaction between the two modalities. Four representative experiments (i.e., base-to-novel generalization, few-shot learning, cross-dataset generalization, and generalization under domain shifts) across 15 datasets demonstrate that our method outperforms existing prompt learning methods.
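To make the optimal-transport alignment concrete, the sketch below computes an entropic OT (Sinkhorn) discrepancy between text-token and image-patch features. This is a minimal illustration under our own assumptions: the function names (`sinkhorn`, `ot_align_loss`), the uniform marginals, the cosine cost, and the hyperparameters (`epsilon`, `n_iters`) are illustrative choices, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def sinkhorn(cost, epsilon=0.1, n_iters=50):
    """Entropic-regularized OT plan for one cost matrix of shape (n, m).
    Assumes uniform marginals over the n text tokens and m image patches."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)   # marginal over text tokens
    nu = torch.full((m,), 1.0 / m)   # marginal over image patches
    K = torch.exp(-cost / epsilon)   # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):         # alternating marginal projections
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan T

def ot_align_loss(text_feats, img_feats):
    """OT discrepancy between L2-normalized text and image features."""
    text_feats = F.normalize(text_feats, dim=-1)
    img_feats = F.normalize(img_feats, dim=-1)
    cost = 1.0 - text_feats @ img_feats.t()  # cosine cost, shape (n, m)
    T = sinkhorn(cost.detach())              # plan computed without gradients
    return (T * cost).sum()                  # transport cost <T, C>

# Illustrative usage: 77 text tokens and 49 image patches, 512-d features.
loss = ot_align_loss(torch.randn(77, 512), torch.randn(49, 512))
```

Detaching the cost inside `sinkhorn` follows the common practice of treating the transport plan as a constant during backpropagation; whether the paper does the same is not stated in this abstract.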