We propose a control framework that integrates model-based bipedal locomotion with residual reinforcement learning (RL) to achieve robust and adaptive walking under real-world uncertainties. Our approach leverages a model-based controller, comprising a Divergent Component of Motion (DCM) trajectory planner and a whole-body controller, as a reliable base policy. To address uncertainties arising from inaccurate dynamics models and sensor noise, we introduce a residual policy trained through RL with domain randomization. Crucially, we employ a model-based oracle policy, which has privileged access to ground-truth dynamics during training, to supervise the residual policy via a novel supervised loss. This supervision enables the policy to efficiently learn corrective behaviors that compensate for unmodeled effects without extensive reward shaping. Our method demonstrates improved robustness and generalization across a range of randomized conditions, offering a scalable solution for sim-to-real transfer in bipedal locomotion.
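The core idea above — a learned residual correcting a model-based base action, supervised by a privileged oracle — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`combined_action`, `oracle_supervised_loss`), the `tanh` bounding of the residual, and the `scale` parameter are all assumptions introduced here for clarity.

```python
import numpy as np

def combined_action(base_action, residual_action, scale=0.1):
    """Final motor command: model-based base action plus a bounded learned residual.

    The tanh squashing and scale factor (assumed here, not specified in the
    abstract) keep the residual small so the base policy remains dominant.
    """
    return base_action + scale * np.tanh(residual_action)

def oracle_supervised_loss(residual_action, base_action, oracle_action, scale=0.1):
    """Supervised loss from the privileged oracle policy.

    During training the oracle, which sees ground-truth dynamics, produces a
    target action; the residual is regressed (L2) toward the correction the
    oracle would apply on top of the base policy.
    """
    target_residual = oracle_action - base_action
    predicted_residual = scale * np.tanh(residual_action)
    return float(np.mean((predicted_residual - target_residual) ** 2))
```

In practice this supervised term would be combined with the standard RL objective under domain randomization; when the oracle and base actions agree (no unmodeled effects), the target residual is zero and the loss drives the residual toward inaction.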