The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, when the model is trained only on seen compositions, e.g., sliced potatoes and red tomatoes. Thanks to prompt tuning on large pre-trained vision-language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact generalization to unseen compositions, including the diversity and informativeness of class context and the entanglement between visual primitives, i.e., state and object, are not properly addressed in the existing CLIP-based CZSL literature. In this paper, we propose a model that prompts the language-informed distribution, a.k.a. PLID, for the CZSL task. Specifically, PLID leverages pre-trained large language models (LLMs) to (i) formulate language-informed class distributions that are diverse and informative, and (ii) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module is proposed to dynamically fuse the classification decisions from the compositional and the primitive spaces. Orthogonal to the existing literature on soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distributions, leading to better zero-shot generalization. Experimental results on the MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of PLID over the prior arts. Our code and models are released at: https://github.com/Cogito2012/PLID.
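The two ideas summarized above — representing each composition class by a distribution of LLM-generated descriptions, and fusing compositional with primitive (state/object) decisions — can be illustrated with a minimal sketch. All embeddings here are random stand-ins for frozen CLIP text/image features, and the fusion weight `alpha` is an illustrative hyperparameter; this is not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 5  # embedding dim and number of LLM descriptions per class (illustrative)

# Hypothetical (state, object) compositions, as in CZSL.
compositions = [("sliced", "tomato"), ("red", "tomato"), ("sliced", "potato")]
states = sorted({s for s, _ in compositions})
objects = sorted({o for _, o in compositions})

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for text embeddings of K diverse LLM descriptions per composition;
# the class is represented by a distribution of embeddings, summarized by its mean.
class_desc = l2norm(rng.normal(size=(len(compositions), K, D)))
class_mean = l2norm(class_desc.mean(axis=1))

# Primitive embeddings for the decomposed state/object decision space.
state_emb = l2norm(rng.normal(size=(len(states), D)))
obj_emb = l2norm(rng.normal(size=(len(objects), D)))

def fused_scores(img, alpha=0.5):
    """Fuse compositional-space and primitive-space logits (simplified VLPD idea)."""
    img = l2norm(img)
    comp = class_mean @ img  # compositional logits: one per (state, object) class
    s = state_emb @ img      # state logits
    o = obj_emb @ img        # object logits
    prim = np.array([s[states.index(st)] + o[objects.index(ob)]
                     for st, ob in compositions])
    return alpha * comp + (1 - alpha) * prim

img = rng.normal(size=D)          # stand-in for a CLIP image embedding
scores = fused_scores(img)
pred = compositions[int(np.argmax(scores))]
```

The dynamic per-sample weighting in the actual VLPD module is richer than the fixed `alpha` used here; the sketch only shows how the two decision spaces combine into a single score per composition.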