We propose OpenVoxel, a training-free algorithm for grouping and captioning sparse voxels for open-vocabulary 3D scene understanding tasks. Given a sparse voxel rasterization (SVR) model obtained from multi-view images of a 3D scene, OpenVoxel produces meaningful groups that correspond to different objects in the scene. Furthermore, by leveraging powerful Vision Language Models (VLMs) and Multi-modal Large Language Models (MLLMs), OpenVoxel builds an informative scene map by captioning each group, enabling downstream 3D scene understanding tasks such as open-vocabulary segmentation (OVS) and referring expression segmentation (RES). Unlike previous methods, our approach is training-free and does not rely on embeddings from a CLIP/BERT text encoder; instead, we perform text-to-text search directly with MLLMs. Through extensive experiments, our method demonstrates superior performance compared to recent work, particularly on complex RES tasks. The code will be released.