In this paper, we present a novel algorithm for probabilistically updating and rasterizing semantic maps within 3D Gaussian Splatting (3D-GS). Although previous methods have introduced algorithms that learn to rasterize features in 3D-GS for enhanced scene understanding, 3D-GS can fail without warning, which presents a challenge for safety-critical robotic applications. To address this gap, we propose a method that extends continuous semantic mapping from voxels to ellipsoids, combining the precise structure of 3D-GS with the uncertainty quantification of probabilistic robotic maps. Given a set of images, our algorithm performs a probabilistic semantic update directly on the 3D ellipsoids, using conjugate priors to obtain an expectation and variance. We also propose a probabilistic rasterization that returns per-pixel segmentation predictions with quantifiable uncertainty. We compare our method with similar probabilistic voxel-based methods to validate our extension to 3D ellipsoids, and perform ablation studies on uncertainty quantification and temporal smoothing.
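As a rough illustration of the conjugate-prior update mentioned above, the sketch below assumes a Dirichlet prior over per-ellipsoid class probabilities updated with Categorical semantic observations, which yields closed-form posterior expectation and variance. The class count, prior, and observation list here are hypothetical placeholders, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical setup: one ellipsoid, K semantic classes, uniform Dirichlet prior.
K = 3
alpha = np.ones(K)

# Simulated per-pixel class observations projected onto this ellipsoid.
observations = [0, 0, 2, 0, 1]
counts = np.bincount(observations, minlength=K)

# Conjugate update: Dirichlet posterior parameters are prior plus class counts.
alpha_post = alpha + counts

# Closed-form posterior expectation and variance of the class probabilities.
a0 = alpha_post.sum()
mean = alpha_post / a0
var = alpha_post * (a0 - alpha_post) / (a0**2 * (a0 + 1))
```

The posterior mean gives the per-class segmentation estimate, while the variance provides the quantifiable uncertainty the abstract refers to; both come for free from conjugacy, without sampling.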