Vision-Language Models (VLMs) like CLIP create aligned embedding spaces for text and images, making it possible for anyone to build a visual classifier by simply naming the classes they want to distinguish. However, a model that works well in one domain may fail in another, and non-expert users have no straightforward way to assess whether their chosen VLM will work on their problem. We build on prior work that uses text-only comparisons to evaluate how well a model works for a given natural language task, and explore approaches that also generate synthetic images relevant to that task to evaluate and refine the prediction of zero-shot accuracy. We show that adding generated imagery to the baseline text-only scores substantially improves the quality of these predictions. It also gives users feedback on the kinds of images that were used to make the assessment. Experiments on standard CLIP benchmark datasets demonstrate that the image-based approach helps users predict, without any labeled examples, whether a VLM will be effective for their application.
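As a concrete illustration of the pipeline described above, the sketch below scores a candidate VLM for a user-defined class list in two stages: a text-only separability score over the class prompts, followed by zero-shot classification of synthetic images generated for each class. This is a minimal sketch, not the paper's exact method: the checkpoints (openai/clip-vit-base-patch32, runwayml/stable-diffusion-v1-5), the prompt template, and both scoring heuristics are illustrative assumptions.

```python
# Illustrative sketch only: hypothetical checkpoints and heuristics, not the
# paper's exact method.  Scores a candidate VLM for a user-defined task
# without any labeled data.
import torch
from transformers import CLIPModel, CLIPProcessor
from diffusers import StableDiffusionPipeline

class_names = ["golden retriever", "tabby cat", "red fox"]  # the user's task
prompts = [f"a photo of a {c}" for c in class_names]

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# --- Text-only baseline: how separable are the class prompts? ---
text_in = proc(text=prompts, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    t = clip.get_text_features(**text_in)
t = t / t.norm(dim=-1, keepdim=True)
sim = t @ t.T
# Low similarity between *different* class prompts is a cheap proxy for
# the model being able to tell the classes apart.
mask = ~torch.eye(len(class_names), dtype=torch.bool, device=device)
text_score = 1.0 - sim[mask].mean().item()

# --- Image-based refinement: classify synthetic images of each class ---
dtype = torch.float16 if device == "cuda" else torch.float32
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)

correct = total = 0
for label, cls in enumerate(class_names):
    for img in sd(f"a photo of a {cls}", num_images_per_prompt=4).images:
        batch = proc(text=prompts, images=img,
                     return_tensors="pt", padding=True).to(device)
        with torch.no_grad():
            logits = clip(**batch).logits_per_image  # shape (1, n_classes)
        correct += int(logits.argmax(dim=-1).item() == label)
        total += 1
image_score = correct / total  # zero-shot accuracy on synthetic imagery

print(f"text-only separability score: {text_score:.3f}")
print(f"synthetic-image accuracy:     {image_score:.3f}")
```

The generated images also serve the feedback role the abstract mentions: a user can inspect them to see what kinds of pictures drove the assessment and whether they resemble the intended deployment domain.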