Vision-language models (VLMs) such as CLIP have demonstrated impressive zero-shot ability in image classification by aligning text and images, but they still underperform task-specific expert models. Conversely, expert models excel in their specialized domains but lack zero-shot ability on new tasks. How to obtain both the high performance of expert models and zero-shot ability is an important research direction. In this paper, we show that by constructing a model hub and aligning models with their functionalities via model labels, new tasks can be solved in a zero-shot manner by effectively selecting and reusing models from the hub. We introduce a novel paradigm, Model Label Learning (MLL), which bridges the gap between models and their functionalities through a Semantic Directed Acyclic Graph (SDAG) and leverages an algorithm, Classification Head Combination Optimization (CHCO), to select capable models for new tasks. Compared with the foundation-model paradigm, MLL is less costly and more scalable: its zero-shot ability grows with the size of the model hub. Experiments on seven real-world datasets validate the effectiveness and efficiency of MLL, demonstrating that expert models can be effectively reused for zero-shot tasks. Our code will be released publicly.