Legged robots with egocentric forward-facing depth cameras can couple exteroception and proprioception to achieve robust forward agility on complex terrain. When such a robot walks backward, however, the forward-only field of view provides no terrain preview. Purely proprioceptive controllers can remain stable on moderate ground when moving backward, but they cannot fully exploit the robot's capabilities on complex terrain and inevitably collide with obstacles. We present Look Forward to Walk Backward (LF2WB), an efficient terrain-memory locomotion framework that uses forward egocentric depth and proprioception to write a compact associative memory during forward motion and retrieves it for collision-free backward locomotion without rearward vision. The memory backbone employs a delta-rule selective update that softly erases and then writes the memory state along the active subspace. Training uses hardware-efficient parallel computation, and deployment runs recurrent, constant-time per-step inference with a constant-size state, making the approach suitable for onboard processors on low-cost robots. Experiments in both simulation and real-world scenarios demonstrate the effectiveness of our method, improving backward agility across complex terrains under limited sensing.
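To make the memory mechanism concrete, below is a minimal NumPy sketch of a generic delta-rule selective update of the kind the abstract describes: the memory is first softly erased along the key (active-subspace) direction, then the new key-value association is written there. The function name, shapes, and the scalar write strength `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def delta_rule_update(M, k, v, beta):
    """One delta-rule selective update (illustrative sketch).

    M:    (d_v, d_k) associative memory matrix
    k:    (d_k,) key defining the active subspace direction
    v:    (d_v,) value to associate with k
    beta: write strength in [0, 1]; beta=1 fully overwrites along k
    """
    k = k / (np.linalg.norm(k) + 1e-8)       # unit key direction
    erased = M - beta * np.outer(M @ k, k)   # softly erase old content along k
    return erased + beta * np.outer(v, k)    # write the new association v k^T
```

Because each step touches only the rank-one subspace spanned by `k`, content stored along orthogonal keys is preserved, and retrieval is a single matrix-vector product `M @ k`, which is what enables constant-time, constant-size recurrent inference at deployment.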