Existing methods for segmenting Neural Radiance Fields (NeRFs) are typically optimization-based, requiring slow per-scene training that sacrifices the zero-shot capability of 2D foundation models. We introduce DivAS (Depth-interactive Voxel Aggregation Segmentation), an optimization-free, fully interactive framework that addresses these limitations. DivAS operates through a fast GUI workflow: 2D SAM masks generated from user point prompts are refined with NeRF-derived depth priors, improving geometric accuracy and foreground-background separation. The core of our contribution is a custom CUDA kernel that aggregates the refined multi-view masks into a unified 3D voxel grid in under 200 ms, providing real-time visual feedback without any per-scene training. Experiments on Mip-NeRF 360 and LLFF show that DivAS matches the segmentation quality of optimization-based methods while running 2-2.5x faster end-to-end, and up to an order of magnitude faster when user prompting time is excluded.
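To make the depth-prior refinement step concrete, here is a minimal sketch assuming a per-view NeRF depth map and a binary SAM mask: a first pass estimates the mean foreground depth, and a second pass prunes mask pixels whose depth strays from it. The function names, the mean-based statistic, and the `tol` band are illustrative assumptions, not the DivAS API.

```cuda
// Hypothetical sketch of depth-guided SAM mask refinement; not the paper's code.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Pass 1: accumulate depth over pixels SAM marked as foreground.
__global__ void accumulate_foreground_depth(const float* depth, const unsigned char* mask,
                                            int n, float* sum, int* cnt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && mask[i]) { atomicAdd(sum, depth[i]); atomicAdd(cnt, 1); }
}

// Pass 2: drop masked pixels whose depth strays from the foreground mean
// by more than `tol`, separating the object from background bleed.
__global__ void prune_by_depth(const float* depth, unsigned char* mask,
                               int n, float mean_depth, float tol) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && mask[i] && fabsf(depth[i] - mean_depth) > tol) mask[i] = 0;
}

int main() {
    const int W = 64, H = 64, N = W * H;
    static float h_depth[N]; static unsigned char h_mask[N];
    for (int i = 0; i < N; ++i) {
        int x = i % W, y = i / W;
        bool fg = (x > 16 && x < 48 && y > 16 && y < 48);
        h_depth[i] = fg ? 2.0f : 8.0f;                        // object at 2m, backdrop at 8m
        h_mask[i]  = (x > 16 && x < 52 && y > 16 && y < 48);  // SAM mask bleeds to the right
    }
    float *d_depth, *d_sum; unsigned char* d_mask; int* d_cnt;
    cudaMalloc(&d_depth, N * sizeof(float));
    cudaMalloc(&d_mask, N);
    cudaMalloc(&d_sum, sizeof(float));
    cudaMalloc(&d_cnt, sizeof(int));
    cudaMemcpy(d_depth, h_depth, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_mask, h_mask, N, cudaMemcpyHostToDevice);
    cudaMemset(d_sum, 0, sizeof(float));
    cudaMemset(d_cnt, 0, sizeof(int));

    int threads = 256, blocks = (N + threads - 1) / threads;
    accumulate_foreground_depth<<<blocks, threads>>>(d_depth, d_mask, N, d_sum, d_cnt);
    float sum; int cnt;
    cudaMemcpy(&sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&cnt, d_cnt, sizeof(int), cudaMemcpyDeviceToHost);
    prune_by_depth<<<blocks, threads>>>(d_depth, d_mask, N, sum / cnt, /*tol=*/1.0f);

    cudaMemcpy(h_mask, d_mask, N, cudaMemcpyDeviceToHost);
    int kept = 0; for (int i = 0; i < N; ++i) kept += h_mask[i];
    printf("foreground pixels after depth pruning: %d\n", kept);
    cudaFree(d_depth); cudaFree(d_mask); cudaFree(d_sum); cudaFree(d_cnt);
    return 0;
}
```

A mean is used here only for brevity; a robust statistic such as the median would tolerate heavily contaminated masks better.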
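The aggregation step can likewise be sketched as one thread per voxel that projects the voxel center into every refined view mask and votes. This layout (row-major 3x4 projection matrices, a visibility-weighted vote fraction) is an assumption for illustration, not the paper's exact kernel.

```cuda
// Hypothetical sketch of multi-view mask aggregation into a voxel grid.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void aggregate_masks(const unsigned char* masks, // [V][H][W] refined 2D masks
                                const float* proj,          // [V][12] row-major 3x4 cameras
                                int V, int W, int H,
                                int gx, int gy, int gz, float voxel, float3 origin,
                                float vote_frac, unsigned char* voxel_fg) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= gx * gy * gz) return;
    // Voxel center in world coordinates.
    int x = idx % gx, y = (idx / gx) % gy, z = idx / (gx * gy);
    float px = origin.x + (x + 0.5f) * voxel;
    float py = origin.y + (y + 0.5f) * voxel;
    float pz = origin.z + (z + 0.5f) * voxel;
    int hits = 0, seen = 0;
    for (int k = 0; k < V; ++k) {           // project into every refined view mask
        const float* P = proj + 12 * k;
        float u = P[0]*px + P[1]*py + P[2]*pz  + P[3];
        float v = P[4]*px + P[5]*py + P[6]*pz  + P[7];
        float w = P[8]*px + P[9]*py + P[10]*pz + P[11];
        if (w <= 0.0f) continue;            // behind the camera
        int ui = (int)(u / w), vi = (int)(v / w);
        if (ui < 0 || ui >= W || vi < 0 || vi >= H) continue;
        ++seen;
        hits += masks[(size_t)k * W * H + (size_t)vi * W + ui];
    }
    // Foreground if enough of the views that actually see this voxel agree.
    voxel_fg[idx] = (seen > 0 && hits >= vote_frac * seen);
}

int main() {
    const int V = 1, W = 64, H = 64, G = 32, NV = G * G * G;
    // Single pinhole view: f = 32, principal point at image center, looking down +z.
    float h_proj[12] = { 32, 0, 32, 0,   0, 32, 32, 0,   0, 0, 1, 0 };
    static unsigned char h_mask[W * H];
    for (int i = 0; i < W * H; ++i) {
        int u = i % W, v = i / W;
        h_mask[i] = (u >= 24 && u < 40 && v >= 24 && v < 40); // refined 2D mask
    }
    unsigned char *d_masks, *d_fg; float* d_proj;
    cudaMalloc(&d_masks, V * W * H);
    cudaMalloc(&d_fg, NV);
    cudaMalloc(&d_proj, sizeof(h_proj));
    cudaMemcpy(d_masks, h_mask, W * H, cudaMemcpyHostToDevice);
    cudaMemcpy(d_proj, h_proj, sizeof(h_proj), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (NV + threads - 1) / threads;
    aggregate_masks<<<blocks, threads>>>(d_masks, d_proj, V, W, H, G, G, G,
                                         2.0f / G, make_float3(-1.f, -1.f, 2.f),
                                         0.5f, d_fg);
    static unsigned char h_fg[NV];
    cudaMemcpy(h_fg, d_fg, NV, cudaMemcpyDeviceToHost);
    int n = 0; for (int i = 0; i < NV; ++i) n += h_fg[i];
    printf("foreground voxels: %d / %d\n", n, NV);
    cudaFree(d_masks); cudaFree(d_fg); cudaFree(d_proj);
    return 0;
}
```

One thread per voxel with a read-only loop over views needs no atomics or synchronization, which is consistent with an aggregation pass completing in well under 200 ms on a modern GPU for moderate grid sizes.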