Humanoid robots can suffer significant performance drops under small changes in dynamics, task specifications, or environment setup. We propose HoRD, a two-stage learning framework for robust humanoid control under domain shift. First, we train a high-performance teacher policy via history-conditioned reinforcement learning, in which the policy infers a latent dynamics context from recent state--action trajectories and adapts online to diverse randomized dynamics. Second, we perform online distillation to transfer the teacher's robust control capabilities into a transformer-based student policy that operates on sparse root-relative 3D joint keypoint trajectories. By combining history-conditioned adaptation with online distillation, HoRD enables a single policy to adapt zero-shot to unseen domains without per-domain retraining. Extensive experiments show that HoRD outperforms strong baselines in robustness and transfer, particularly in unseen domains and under external perturbations. Code and project page are available at \href{https://tonywang-0517.github.io/hord/}{https://tonywang-0517.github.io/hord/}.