Vision-language models (VLMs) like CLIP have been celebrated for their ability to perform zero-shot visual recognition on open-vocabulary concepts. This is achieved by selecting the object category whose textual representation bears the highest similarity with the query image. While successful in some domains, this method struggles to identify fine-grained entities and to generalize to unseen concepts that are not captured by the training distribution. Recent works attempt to mitigate these challenges by integrating category descriptions at test time, albeit yielding modest improvements. We attribute these limited gains to a fundamental misalignment between image and description representations, which is rooted in the pretraining structure of CLIP. In this paper, we propose GRAIN, a new pretraining strategy aimed at aligning representations at both fine and coarse levels simultaneously. Our approach learns to jointly ground textual descriptions in image regions and align overarching captions with global image representations. To drive this pretraining, we leverage frozen Multimodal Large Language Models (MLLMs) to derive large-scale synthetic annotations. We demonstrate the enhanced zero-shot performance of our model compared to current state-of-the-art methods across 11 diverse image classification datasets. Additionally, we introduce Products-2023, a newly curated, manually labeled dataset featuring novel concepts, and showcase our model's ability to recognize these concepts by benchmarking on it. Significant improvements achieved by our model on other downstream tasks like retrieval further highlight the superior quality of representations learned by our approach. Code available at https://github.com/shaunak27/grain-clip.
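The dual-level alignment described above can be illustrated with a minimal sketch: a CLIP-style contrastive (InfoNCE) objective applied at two granularities, once between global image and caption embeddings, and once between region and description embeddings. All function names, the `lam` weighting parameter, and the exact loss composition below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def l2_normalize(x):
    """Project embeddings onto the unit sphere, as in CLIP."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(a, b, temperature=0.07):
    """Contrastive loss: row i of `a` should match row i of `b`.

    a, b: (n, d) arrays of embeddings for n paired samples.
    """
    a, b = l2_normalize(a), l2_normalize(b)
    logits = a @ b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(a))
    return -np.mean(np.log(probs[idx, idx]))

def dual_level_loss(img_global, cap_global, region_emb, desc_emb, lam=0.5):
    """Hypothetical GRAIN-style objective: coarse (image<->caption)
    plus fine (region<->description) alignment, weighted by `lam`."""
    coarse = info_nce(img_global, cap_global)
    fine = info_nce(region_emb, desc_emb)
    return coarse + lam * fine
```

With perfectly matched pairs the loss approaches zero, while mismatched pairs yield a higher value; a real implementation would compute these embeddings with the image and text encoders and backpropagate through both terms jointly.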