Vision-language foundation models (such as CLIP) have recently shown strong transfer learning ability, owing to large-scale image-text pre-training. However, target-domain data in downstream tasks can differ substantially from the pre-training distribution, which makes it hard for a single such model to generalize well. Alternatively, there exists a wide range of expert models that contain diversified vision and/or language knowledge, pre-trained on different modalities, tasks, networks, and datasets. Unfortunately, these models are "isolated agents" with heterogeneous structures, and how to integrate their knowledge for generalizing CLIP-like models has not been fully explored. To bridge this gap, we propose a general and concise TransAgent framework, which transports the knowledge of the isolated agents in a unified manner and effectively guides CLIP to generalize via multi-source knowledge distillation. With this distinct framework, we flexibly collaborate with 11 heterogeneous agents to empower vision-language foundation models, without extra cost in the inference phase. Finally, our TransAgent achieves state-of-the-art performance on 11 visual recognition datasets. Under the same low-shot setting, it outperforms the popular CoOp by around 10% on average, and by 20% on EuroSAT, which contains large domain shifts.
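The abstract describes distilling knowledge from multiple heterogeneous teacher agents into a CLIP-like student. Below is a minimal PyTorch sketch of how such multi-source distillation could be structured, assuming per-teacher linear projectors into the student's embedding space and a learned softmax gate that fuses the teachers; the class name `MultiSourceDistillation`, its arguments, and the gating design are illustrative assumptions, not TransAgent's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSourceDistillation(nn.Module):
    """Illustrative sketch of multi-source knowledge distillation:
    features from several frozen teacher "agents" are projected into
    the student's embedding space and fused with learned gating weights
    before an alignment loss is applied. Names and design choices here
    are assumptions for exposition, not the paper's API."""

    def __init__(self, student_dim: int, teacher_dims: list[int]):
        super().__init__()
        # One linear projector per heterogeneous teacher, mapping each
        # teacher's feature space into the student's embedding space.
        self.projectors = nn.ModuleList(
            nn.Linear(d, student_dim) for d in teacher_dims
        )
        # Learnable scalar gates decide how much each teacher contributes.
        self.gates = nn.Parameter(torch.zeros(len(teacher_dims)))

    def forward(self, student_feat, teacher_feats):
        # Project every teacher feature and stack: (num_teachers, B, D).
        projected = torch.stack(
            [p(f) for p, f in zip(self.projectors, teacher_feats)]
        )
        # Softmax over teachers yields normalized fusion weights.
        w = F.softmax(self.gates, dim=0).view(-1, 1, 1)
        fused = (w * projected).sum(dim=0)  # (B, D)
        # Align the student with the fused multi-teacher target; gradients
        # reach both the student's learnable parameters and the projectors.
        return F.mse_loss(student_feat, fused)
```

Note that the teachers and projectors are only needed during training; at inference time only the distilled student runs, which is consistent with the abstract's claim of no extra cost in the inference phase.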