Autonomous safe navigation in unstructured and novel environments poses significant challenges, especially when environmental information is available only through low-cost vision sensors. Although safe reactive approaches have been proposed to ensure robot safety in complex environments, many rest on the assumption that the robot has prior knowledge of obstacle locations and geometries. In this paper, we present a real-time, vision-based framework that constructs continuous, first-order differentiable Signed Distance Fields (SDFs) of unknown environments entirely online, without any pre-training, and is fully compatible with established SDF-based reactive controllers. To achieve robust performance under practical sensing conditions, our approach explicitly accounts for the noise of affordable RGB-D cameras, refining the neural SDF representation online to yield smoother geometry and stable gradient estimates. We validate the proposed method in simulation and in real-world experiments on a Fetch robot. Videos and supplementary material are available at https://satyajeetburla.github.io/rnbf/.
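To make concrete why first-order differentiability matters for SDF-based reactive control, the following is a minimal illustrative sketch, not the paper's implementation: a small smooth MLP stands in for the online neural SDF, and autograd supplies both the distance value and its spatial gradient at query points, which is the interface a reactive controller typically consumes. The names `NeuralSDF` and `query_sdf_and_gradient` are hypothetical, and the architecture is an assumption for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a neural SDF f(x) -> R (not the paper's model).
# Smooth activations (Softplus) keep f first-order differentiable, so the
# distance and its gradient are well defined everywhere for reactive control.
class NeuralSDF(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Signed distance estimate at each 3D query point.
        return self.net(x).squeeze(-1)


def query_sdf_and_gradient(model: NeuralSDF, points: torch.Tensor):
    """Return SDF values and their gradients w.r.t. the query points."""
    points = points.requires_grad_(True)
    d = model(points)
    # Summing is a standard trick: each point only affects its own output,
    # so the gradient of the sum recovers the per-point spatial gradient.
    (grad,) = torch.autograd.grad(d.sum(), points)
    return d.detach(), grad


if __name__ == "__main__":
    model = NeuralSDF()
    pts = torch.randn(5, 3)  # hypothetical robot-frame query points
    dist, grad = query_sdf_and_gradient(model, pts)
    print(dist.shape, grad.shape)  # torch.Size([5]) torch.Size([5, 3])
```

An SDF-based reactive controller would query `dist` as a barrier-style safety margin and `grad` as the obstacle-avoidance direction at each control step; the paper's contribution is keeping such a representation accurate and smooth online from noisy RGB-D input.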