Few-shot learning (FSL) addresses the challenge of classifying novel classes with limited training samples. While some methods leverage semantic knowledge from smaller-scale models to mitigate data scarcity, these approaches often introduce noise and bias due to the inherent simplicity of the data such models rely on. In this paper, we propose a novel framework, Synergistic Knowledge Transfer (SynTrans), which effectively transfers diverse and complementary knowledge from large multimodal models to empower an off-the-shelf few-shot learner. Specifically, SynTrans employs CLIP as a strong teacher and a few-shot vision encoder as a weak student, distilling semantically aligned visual knowledge via an unsupervised proxy task. Subsequently, a training-free synergistic knowledge mining module facilitates collaboration among large multimodal models to extract high-quality semantic knowledge. Building on this, a visual-semantic bridging module enables bi-directional knowledge transfer between the visual and semantic spaces, transforming explicit visual knowledge and implicit semantic knowledge into category-specific classifier weights. Finally, SynTrans introduces a visual weight generator and a semantic weight reconstructor to adaptively construct optimal multimodal FSL classifiers. Experimental results on four FSL datasets demonstrate that SynTrans, even when paired with a simple few-shot vision encoder, significantly outperforms current state-of-the-art methods.
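To make the teacher-student distillation step concrete, the following is a minimal PyTorch sketch, not the paper's implementation. The abstract does not specify the unsupervised proxy task, so this sketch assumes a simple cosine feature-alignment objective on unlabeled images; the names `StudentEncoder`, `distill_step`, and the teacher callable `clip_visual` (e.g., the image tower of a pretrained CLIP model) are hypothetical stand-ins.

```python
# Hypothetical sketch: distilling semantic-aligned visual knowledge from a
# frozen CLIP teacher into a small few-shot student encoder, without labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentEncoder(nn.Module):
    """Stand-in for the off-the-shelf few-shot vision encoder (hypothetical)."""
    def __init__(self, out_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Project student features into the teacher's embedding space.
        self.proj = nn.Linear(128, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.backbone(x))

def distill_step(student, clip_visual, images, optimizer):
    """One unsupervised distillation step: align student features with the
    frozen CLIP teacher's image embeddings (assumed proxy task)."""
    with torch.no_grad():
        t = F.normalize(clip_visual(images), dim=-1)  # teacher embeddings
    s = F.normalize(student(images), dim=-1)          # student embeddings
    loss = (1.0 - (s * t).sum(dim=-1)).mean()         # cosine-alignment loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this assumed objective, the student needs only unlabeled images, which matches the abstract's claim that the visual knowledge is distilled through an unsupervised proxy task; the subsequent modules (knowledge mining, visual-semantic bridging, and weight generation/reconstruction) would then operate on the aligned embedding space this step produces.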