We propose a contact-explicit hierarchical architecture coupling Reinforcement Learning (RL) and Model Predictive Control (MPC), in which a high-level RL agent provides gait and navigation commands to a low-level locomotion MPC. This offloads the combinatorial burden of contact-timing selection from the MPC by learning acyclic gaits through trial and error in simulation. We show that only a minimal set of rewards and limited tuning are required to obtain effective policies. We validate the architecture in simulation across robotic platforms spanning 50 kg to 120 kg and multiple MPC implementations, observing the emergence of acyclic gaits and timing adaptations in flat-terrain legged and hybrid locomotion, and further demonstrating extensibility to non-flat terrains. Across all platforms, we achieve zero-shot sim-to-sim transfer without domain randomization, and we further demonstrate zero-shot sim-to-real transfer, likewise without domain randomization, on Centauro, our 120 kg wheeled-legged humanoid robot. We make our software framework and evaluation results publicly available at https://github.com/AndrePatri/AugMPC.