We propose LangHOPS, the first Multimodal Large Language Model (MLLM)-based framework for open-vocabulary object-part instance segmentation. Given an image, LangHOPS jointly detects and segments hierarchical object and part instances from open-vocabulary candidate categories. Unlike prior approaches that rely on heuristic or learnable visual grouping, our approach grounds object-part hierarchies in language space. It integrates the MLLM into the object-part parsing pipeline to leverage its rich knowledge and reasoning capabilities and to link multi-granularity concepts within the hierarchies. We evaluate LangHOPS across multiple challenging scenarios, including in-domain and cross-dataset object-part instance segmentation and zero-shot semantic segmentation. LangHOPS achieves state-of-the-art results, surpassing previous methods by 5.5% Average Precision (AP) in-domain and 4.8% AP cross-dataset on PartImageNet, and by 2.5% mIoU on unseen object parts in ADE20K (zero-shot). Ablation studies further validate the effectiveness of the language-grounded hierarchy and the MLLM-driven part query refinement strategy. The code will be released here.
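To make the core idea concrete, the following is a minimal, hypothetical sketch of what grounding an object-part hierarchy in language space could look like: each open-vocabulary object category is expanded into part categories before any visual grouping. This is not the authors' implementation; `query_mllm_for_parts` is an assumed stand-in for an MLLM call and returns canned expansions for illustration only.

```python
# Conceptual sketch (not the LangHOPS code): build a language-grounded
# object-part hierarchy. `query_mllm_for_parts` is a hypothetical stub
# standing in for an MLLM prompt such as
# "List the visible parts of a <category>."
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    category: str                               # open-vocabulary category name
    children: List["Node"] = field(default_factory=list)


def query_mllm_for_parts(category: str) -> List[str]:
    """Hypothetical MLLM query; canned answers for illustration."""
    canned = {
        "dog": ["head", "torso", "leg", "tail"],
        "car": ["wheel", "door", "window", "hood"],
    }
    return canned.get(category, [])


def build_hierarchy(object_categories: List[str]) -> List[Node]:
    """Link multi-granularity concepts in language space: each object
    category is expanded into its part categories via the (stubbed) MLLM,
    yielding a hierarchy that downstream detection/segmentation queries
    could be derived from."""
    roots = []
    for cat in object_categories:
        node = Node(cat)
        node.children = [Node(f"{cat} {part}")
                         for part in query_mllm_for_parts(cat)]
        roots.append(node)
    return roots


if __name__ == "__main__":
    for root in build_hierarchy(["dog", "car"]):
        print(root.category, "->", [c.category for c in root.children])
```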