Open-vocabulary 3D scene understanding is a significant challenge in the field. Recent advances seek to transfer knowledge embedded in vision-language models from the 2D domain to the 3D domain. However, these approaches typically require learning prior knowledge from specific 3D scene datasets, which limits their applicability in open-world scenarios. The Segment Anything Model (SAM) has demonstrated remarkable zero-shot segmentation capabilities, prompting us to investigate its potential for understanding 3D scenes without training. In this paper, we introduce OV-SAM3D, a universal framework for open-vocabulary 3D scene understanding that performs understanding tasks on any 3D scene without requiring prior knowledge of the scene. Specifically, our method consists of two key sub-modules: first, we generate superpoints as initial 3D prompts and refine these prompts using segmentation masks derived from SAM; second, we integrate a specially designed overlapping score table with open tags from the Recognize Anything Model (RAM) to produce final 3D instances with open-world labels. Empirical evaluations on the ScanNet200 and nuScenes datasets demonstrate that our approach surpasses existing open-vocabulary methods in unknown open-world environments.
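To make the two-stage pipeline concrete, the following is a minimal, self-contained sketch of the abstract's flow: superpoints as initial 3D prompts, refinement against per-point SAM mask associations, and an overlapping score table that assigns RAM open tags to merged instances. Everything here is a hypothetical stand-in: `superpoint_prompts` uses a toy voxel grid in place of a real superpoint method, the point-to-SAM-mask association is randomly generated, and the overlap scoring is a schematic reading of the abstract, not the authors' exact formulation.

```python
import numpy as np

# --- toy inputs (stand-ins for a real point cloud, SAM masks, RAM tags) ---
rng = np.random.default_rng(0)
points = rng.uniform(0, 4, size=(2000, 3))  # N x 3 scene points

def superpoint_prompts(points, cell=1.0):
    """Initial 3D prompts: a simple voxel-grid oversegmentation stands in
    for a real superpoint method (e.g., graph-based oversegmentation)."""
    keys = np.floor(points / cell).astype(int)
    _, labels = np.unique(keys, axis=0, return_inverse=True)
    return labels

sp = superpoint_prompts(points)

# Per-point 2D SAM mask ids, as if each point were projected into a frame
# and looked up in that frame's SAM segmentation (hypothetical association).
sam_mask_of_point = rng.integers(0, 6, size=len(points))
ram_tag_of_mask = dict(enumerate(
    ["chair", "table", "sofa", "lamp", "floor", "wall"]))  # assumed RAM tags

def refine_and_label(sp, sam_mask_of_point, ram_tag_of_mask):
    """Build an overlap score table S[i, m] = fraction of superpoint i's
    points that fall in SAM mask m, merge superpoints that agree on their
    dominant mask, and label each merged instance with that mask's RAM tag."""
    n_sp = sp.max() + 1
    n_masks = max(ram_tag_of_mask) + 1
    score = np.zeros((n_sp, n_masks))
    np.add.at(score, (sp, sam_mask_of_point), 1.0)   # overlap counts
    score /= score.sum(axis=1, keepdims=True)        # normalize per superpoint
    dominant = score.argmax(axis=1)                  # best-overlapping mask
    instances = {}                                   # mask id -> superpoint ids
    for i, m in enumerate(dominant):
        instances.setdefault(int(m), []).append(i)
    return {ram_tag_of_mask[m]: sps for m, sps in instances.items()}

for tag, sps in refine_and_label(sp, sam_mask_of_point, ram_tag_of_mask).items():
    print(f"{tag}: merged {len(sps)} superpoints into one open-labeled instance")
```

In a real system, the score table would aggregate overlaps across many posed RGB frames rather than a single synthetic association, but the merge-by-dominant-mask logic conveys how 2D SAM masks and RAM tags can lift to 3D instances without scene-specific training.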