Interactive 3D point cloud segmentation enables efficient annotation of complex 3D scenes through user-guided prompts. However, current approaches are typically restricted to a single domain (indoor or outdoor) and to a single form of user interaction (either spatial clicks or textual prompts). Moreover, training on multiple datasets often leads to negative transfer, resulting in domain-specific tools that lack generalizability. To address these limitations, we present SNAP (Segment aNything in Any Point cloud), a unified model for interactive 3D segmentation that supports both point-based and text-based prompts across diverse domains. Our approach achieves cross-domain generalizability by training on 7 datasets spanning indoor, outdoor, and aerial environments, while employing domain-adaptive normalization to prevent negative transfer. For text-prompted segmentation, we automatically generate mask proposals without human intervention and match them against CLIP embeddings of textual queries, enabling both panoptic and open-vocabulary segmentation. Extensive experiments demonstrate that SNAP consistently delivers high-quality segmentation results. We achieve state-of-the-art performance on 8 out of 9 zero-shot benchmarks for spatially prompted segmentation and competitive results on all 5 text-prompted benchmarks. These results show that a unified model can match or exceed specialized domain-specific approaches, providing a practical tool for scalable 3D annotation. Project page: https://neu-vi.github.io/SNAP/
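The text-prompted matching step described above can be sketched as a cosine-similarity lookup between mask-proposal features and CLIP text embeddings. This is a minimal illustration, not SNAP's actual implementation: the function name, the toy 2-D embeddings, and the argmax assignment rule are all assumptions for demonstration purposes.

```python
import numpy as np

def match_masks_to_text(mask_embeds, text_embeds):
    # Hypothetical sketch: L2-normalize both embedding sets, then score
    # every (mask proposal, text query) pair by cosine similarity.
    m = mask_embeds / np.linalg.norm(mask_embeds, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sim = m @ t.T  # shape: (num_masks, num_queries)
    # Assign each text query the highest-scoring mask proposal.
    return sim.argmax(axis=0)

# Toy example: 3 made-up mask embeddings, 2 made-up text-query embeddings.
masks = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
texts = np.array([[0.9, 0.1], [0.1, 0.9]])
print(match_masks_to_text(masks, texts))  # → [0 1]
```

In an open-vocabulary setting, the same similarity matrix could instead be thresholded so that a query may match several masks or none at all.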