This paper introduces ProLab, a novel approach that uses a property-level label space to build strong, interpretable segmentation models. Instead of relying solely on category-specific annotations, ProLab supervises segmentation models with descriptive properties grounded in common-sense knowledge. It rests on two core designs. First, we employ Large Language Models (LLMs) and carefully crafted prompts to generate descriptions of all involved categories that carry meaningful common-sense knowledge and follow a structured format. Second, we introduce a description embedding model that preserves semantic correlation across descriptions, and then cluster the descriptions into a set of descriptive properties (e.g., 256) using K-Means. These properties are grounded in interpretable common-sense knowledge consistent with theories of human recognition. We empirically show that our approach yields stronger segmentation performance on five classic benchmarks (ADE20K, COCO-Stuff, Pascal Context, Cityscapes, and BDD). Our method also scales better with extended training steps than category-level supervision. Moreover, our interpretable segmentation framework exhibits emergent generalization, segmenting out-of-domain or unknown categories using only in-domain descriptive properties. Code is available at https://github.com/lambert-x/ProLab.
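The second design above (embedding category descriptions and clustering them into a fixed set of descriptive properties) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding matrix is random stand-in data, the function name `build_property_space` is hypothetical, and K-Means from scikit-learn stands in for whatever clustering setup the paper uses.

```python
# Hypothetical sketch: cluster description embeddings into descriptive
# properties, as in ProLab's property-level label space construction.
# The embeddings here are random placeholders; in the paper they come
# from a description embedding model applied to LLM-generated texts.
import numpy as np
from sklearn.cluster import KMeans

def build_property_space(description_embeddings, num_properties=256):
    """Cluster description embeddings into `num_properties` clusters.

    Each cluster center acts as one descriptive property; each
    description is assigned to the property of its nearest center.
    """
    km = KMeans(n_clusters=num_properties, n_init=10, random_state=0)
    km.fit(description_embeddings)
    return km  # km.labels_[i] is the property id of description i

# Toy usage: 1000 fake description embeddings of dimension 512,
# clustered into 16 properties (the paper uses e.g. 256).
emb = np.random.rand(1000, 512).astype(np.float32)
km = build_property_space(emb, num_properties=16)
```

Each category can then be represented as a multi-hot vector over the property ids of its descriptions, giving the interpretable supervision target for the segmentation model.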