Adapting pre-trained models to open classes is a challenging problem in machine learning. Vision-language models fully exploit knowledge from the text modality and demonstrate strong zero-shot recognition performance, which makes them naturally suited to various open-set problems. More recently, some research has focused on fine-tuning such models for downstream tasks. Prompt tuning methods have achieved substantial improvements by learning context vectors on few-shot data. However, through evaluation under the open-set adaptation setting, where the test data include new classes, we find a dilemma: learned prompts generalize worse than hand-crafted prompts. In this paper, we combine the advantages of both and propose a test-time prompt tuning approach that leverages maximum concept matching (MCM) scores as dynamic weights to generate an input-conditioned prompt for each image at test time. Through extensive experiments on 11 different datasets, we show that our proposed method outperforms all comparison methods on average over both base and new classes. The code is available at https://github.com/gaozhengqing/TTPT
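The abstract describes weighting candidate prompts by their MCM scores to form an input-conditioned prompt per test image. The sketch below illustrates that idea under assumptions: the MCM score is taken as the maximum softmax of cosine similarities between an image embedding and the class text embeddings, and the dynamic weights form a convex combination of candidate prompt embeddings (e.g. hand-crafted vs. learned). Function names (`mcm_score`, `combine_prompts`) and the exact weighting scheme are hypothetical; the paper's actual TTPT implementation may differ.

```python
import numpy as np

def mcm_score(image_feat, text_feats, temperature=1.0):
    # Hypothetical sketch: MCM score as the maximum softmax-scaled
    # cosine similarity between the image and the class text embeddings.
    image_feat = image_feat / np.linalg.norm(image_feat)
    text_feats = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sims = text_feats @ image_feat          # cosine similarity per class
    probs = np.exp(sims / temperature)
    probs /= probs.sum()                    # softmax over classes
    return probs.max()                      # confidence of the best match

def combine_prompts(image_feat, prompt_banks, text_feats_per_bank):
    # Dynamic weighting (assumed form): each candidate prompt set is
    # weighted by how confidently its text embeddings match this image,
    # yielding an input-conditioned prompt for this particular test sample.
    weights = np.array([mcm_score(image_feat, tf) for tf in text_feats_per_bank])
    weights /= weights.sum()                # normalize to a convex combination
    return sum(w * p for w, p in zip(weights, prompt_banks))
```

A prompt bank here stands for the embedding of one prompt variant (hand-crafted or learned); because the weights are computed per image at test time, no gradient updates are required to condition the prompt on the input.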