3D part segmentation is a crucial and challenging task in 3D perception, playing a vital role in applications such as robotics, 3D generation, and 3D editing. Recent methods harness powerful Vision-Language Models (VLMs) for 2D-to-3D knowledge distillation, achieving zero-shot 3D part segmentation. However, these methods are limited by their reliance on text prompts, which restricts their scalability to large-scale unlabeled datasets and their flexibility in handling part ambiguities. In this work, we introduce SAMPart3D, a scalable zero-shot 3D part segmentation framework that segments any 3D object into semantic parts at multiple granularities, without requiring predefined part label sets as text prompts. For scalability, we use text-agnostic vision foundation models to distill a 3D feature extraction backbone, allowing scaling to large unlabeled 3D datasets to learn rich 3D priors. For flexibility, we distill scale-conditioned, part-aware 3D features that support 3D part segmentation at multiple granularities. Once parts are segmented from these scale-conditioned, part-aware 3D features, we use VLMs to assign a semantic label to each part based on its multi-view renderings. Compared to previous methods, SAMPart3D can scale to the recent large-scale 3D object dataset Objaverse and handle complex, non-ordinary objects. Additionally, we contribute a new 3D part segmentation benchmark to address the limited diversity and complexity of objects and parts in existing benchmarks. Experiments show that SAMPart3D significantly outperforms existing zero-shot 3D part segmentation methods and can facilitate various applications, such as part-level editing and interactive segmentation.