3D occupancy prediction provides a comprehensive description of the surrounding scene and has become an essential task for 3D perception. Most existing methods focus on offline perception from one or a few views and cannot be applied to embodied agents, which must gradually perceive the scene through progressive exploration. In this paper, we formulate an embodied 3D occupancy prediction task to target this practical scenario and propose a Gaussian-based EmbodiedOcc framework to accomplish it. We initialize the global scene with uniform 3D semantic Gaussians and progressively update the local regions observed by the embodied agent. For each update, we extract semantic and structural features from the observed image and efficiently incorporate them via deformable cross-attention to refine the regional Gaussians. Finally, we employ Gaussian-to-voxel splatting to obtain the global 3D occupancy from the updated 3D Gaussians. Our EmbodiedOcc assumes an unknown (i.e., uniformly distributed) environment and maintains an explicit global memory of it with 3D Gaussians. It gradually gains knowledge through local refinement of the regional Gaussians, which is consistent with how humans understand new scenes through embodied exploration. We reorganize an EmbodiedOcc-ScanNet benchmark based on local annotations to facilitate evaluation of the embodied 3D occupancy prediction task. Experiments demonstrate that our EmbodiedOcc outperforms existing local prediction methods and accomplishes embodied occupancy prediction with high accuracy and strong expandability. Our code is available at: https://github.com/YkiWu/EmbodiedOcc.
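To make the Gaussian-to-voxel splatting step concrete, the sketch below shows the basic idea under simplifying assumptions: each semantic Gaussian deposits its class logits onto every voxel center, weighted by an isotropic Gaussian density, and the per-voxel class is the argmax of the accumulated logits. The function name, the isotropic scales, and the dense accumulation are illustrative choices, not the paper's actual implementation (which uses anisotropic Gaussians and an efficient local splatting kernel).

```python
import numpy as np

def gaussian_to_voxel(means, scales, logits, grid_size, voxel_size):
    """Simplified Gaussian-to-voxel splatting (illustrative only).

    means:  (N, 3) Gaussian centers in world coordinates
    scales: (N,)   isotropic standard deviations
    logits: (N, C) per-Gaussian semantic logits
    Returns a (G, G, G) grid of predicted class indices.
    """
    num_classes = logits.shape[1]
    acc = np.zeros((grid_size, grid_size, grid_size, num_classes))

    # Coordinates of all voxel centers.
    coords = (np.arange(grid_size) + 0.5) * voxel_size
    xx, yy, zz = np.meshgrid(coords, coords, coords, indexing="ij")
    centers = np.stack([xx, yy, zz], axis=-1)  # (G, G, G, 3)

    # Splat every Gaussian onto the grid, weighted by its density.
    for mu, s, l in zip(means, scales, logits):
        w = np.exp(-0.5 * np.sum((centers - mu) ** 2, axis=-1) / s**2)
        acc += w[..., None] * l

    return acc.argmax(axis=-1)
```

A voxel near a Gaussian's center inherits that Gaussian's dominant class, while distant Gaussians contribute exponentially little, which is what lets local refinement of regional Gaussians update only the corresponding part of the global occupancy.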