Training reinforcement learning (RL) policies for legged robots remains challenging due to high-dimensional continuous actions, hardware constraints, and limited exploration. Existing methods for locomotion and whole-body control work well for position-based control with environment-specific heuristics (e.g., reward shaping, curriculum design, and manual initialization), but are less effective for torque-based control, where sufficiently exploring the action space and obtaining informative gradient signals for training are significantly more difficult. We introduce Growing Policy Optimization (GPO), a training framework that applies a time-varying action transformation to restrict the effective action space in the early stage, thereby encouraging more effective data collection and policy learning, and then progressively expands it to enhance exploration and achieve higher expected return. We prove that this transformation preserves the PPO update rule and introduces only bounded, vanishing gradient distortion, thereby ensuring stable training. We evaluate GPO on both quadruped and hexapod robots, including zero-shot deployment of simulation-trained policies on hardware. Policies trained with GPO consistently outperform those trained without it. These results suggest that GPO provides a general, environment-agnostic optimization framework for learning legged locomotion.
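The time-varying action transformation at the core of GPO can be illustrated with a minimal sketch. The schedule shape, the initial fraction `alpha0`, and the function names below are illustrative assumptions, not the paper's actual implementation: the idea is simply that actions are mapped into a sub-range of the full action space whose width grows over training.

```python
import numpy as np

def growth_schedule(step, total_steps, alpha0=0.2):
    """Hypothetical linear schedule: fraction of the full action
    range available at the given training step (alpha0 at step 0,
    growing to 1.0 by total_steps)."""
    return min(1.0, alpha0 + (1.0 - alpha0) * step / total_steps)

def transform_action(raw_action, step, total_steps, action_low, action_high):
    """Map a raw policy output in [-1, 1] into the currently allowed
    sub-range, centered on the midpoint of the full action range.
    Early in training the effective range is narrow; it expands as
    the schedule approaches 1.0."""
    alpha = growth_schedule(step, total_steps)
    mid = 0.5 * (action_low + action_high)
    half = 0.5 * (action_high - action_low)
    return np.clip(raw_action, -1.0, 1.0) * alpha * half + mid

# Example: a single torque dimension with limits [-10, 10] N·m.
low, high = np.array([-10.0]), np.array([10.0])
early = transform_action(np.array([1.0]), 0, 1000, low, high)     # restricted
late = transform_action(np.array([1.0]), 1000, 1000, low, high)   # full range
```

In this sketch, the same maximal raw action yields a small torque early in training and the full torque limit at the end, which matches the abstract's description of restricting the effective action space first and progressively expanding it.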