Current visual navigation systems often treat the environment as static and lack the ability to adaptively interact with obstacles. This limitation leads to navigation failure when unavoidable obstructions are encountered. In response, we introduce IN-Sight, a novel approach to self-supervised path planning that enables more effective navigation strategies through interaction with obstacles. Using RGB-D observations, IN-Sight computes traversability scores and incorporates them into a semantic map, facilitating long-range path planning in complex, maze-like environments. To navigate around obstacles precisely, IN-Sight employs a local planner trained imperatively on a differentiable costmap using representation learning techniques. The entire framework is trained end-to-end within the state-of-the-art photorealistic Intel SPEAR Simulator. We validate the effectiveness of IN-Sight through extensive benchmarking across a variety of simulated scenarios and through ablation studies. Moreover, we demonstrate the system's real-world applicability with zero-shot sim-to-real transfer, deploying our planner on the legged robot platform ANYmal and showcasing its practical potential for interactive navigation in real environments.
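For illustration only, the sketch below shows one plausible way to fuse per-pixel traversability scores predicted from an RGB-D frame into a top-down map, as the abstract describes; it is not the authors' implementation, and the camera intrinsics, pose convention, and grid parameters are assumptions introduced here.

```python
# Hedged sketch: back-project an RGB-D frame and write per-pixel traversability
# scores into a 2D bird's-eye grid. All parameters below are illustrative.
import numpy as np

def fuse_traversability(depth, scores, K, T_world_cam, grid_size=200, cell_m=0.1):
    """depth: (H, W) metres; scores: (H, W) in [0, 1]; K: 3x3 intrinsics;
    T_world_cam: 4x4 camera-to-world transform. Returns a (grid_size, grid_size)
    map holding the latest traversability score observed in each cell."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    valid = z > 0
    # Back-project pixels to camera-frame 3D points (pinhole model).
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]
    pts_world = (T_world_cam @ pts_cam)[:3]                 # (3, N) world points
    # Quantise ground-plane coordinates into grid cells centred on the map origin.
    ij = np.floor(pts_world[:2] / cell_m).astype(int) + grid_size // 2
    inside = np.all((ij >= 0) & (ij < grid_size), axis=0)
    grid = np.full((grid_size, grid_size), np.nan)
    grid[ij[0, inside], ij[1, inside]] = scores.reshape(-1)[valid][inside]
    return grid
```

A real system would additionally accumulate scores over time (e.g., by averaging or max-pooling per cell) and attach semantic labels; this sketch keeps only the single-frame projection step.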
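The abstract also mentions a local planner trained imperatively on a differentiable costmap. The following minimal sketch, under assumptions not taken from the paper (network size, waypoint parameterisation, and loss weights are invented here), shows how gradients from sampled costmap values could drive such training.

```python
# Hedged sketch: a planner predicts waypoints and is optimised by backpropagating
# costmap values bilinearly sampled at those waypoints. Not the published code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalPlanner(nn.Module):
    """Maps an observation embedding to K waypoints in [-1, 1]^2 (normalised map coords)."""
    def __init__(self, feat_dim=128, num_waypoints=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_waypoints * 2), nn.Tanh())
        self.num_waypoints = num_waypoints

    def forward(self, feat):
        return self.head(feat).view(-1, self.num_waypoints, 2)

def costmap_loss(costmap, waypoints):
    """costmap: (B, 1, H, W) differentiable obstacle cost; waypoints: (B, K, 2) in [-1, 1].
    Bilinear sampling keeps the loss differentiable w.r.t. the predicted waypoints."""
    grid = waypoints.unsqueeze(2)                              # (B, K, 1, 2)
    costs = F.grid_sample(costmap, grid, align_corners=True)   # (B, 1, K, 1)
    smooth = (waypoints[:, 1:] - waypoints[:, :-1]).pow(2).sum(-1).mean()
    return costs.mean() + 0.1 * smooth

planner = LocalPlanner()
feat = torch.randn(4, 128)            # placeholder perception features
costmap = torch.rand(4, 1, 64, 64)    # placeholder differentiable costmap
loss = costmap_loss(costmap, planner(feat))
loss.backward()                       # gradients flow through the costmap samples
```

The smoothness term is one common regulariser for waypoint sequences; the actual objective and representation-learning components used in IN-Sight may differ.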