Humanoid robots can suffer significant performance drops under small changes in dynamics, task specifications, or environment setup. We propose HoRD, a two-stage learning framework for robust humanoid control under domain shift. First, we train a high-performance teacher policy via history-conditioned reinforcement learning, in which the policy infers a latent dynamics context from recent state--action trajectories and thereby adapts online to diverse randomized dynamics. Second, we perform online distillation to transfer the teacher's robust control capabilities into a transformer-based student policy that operates on sparse root-relative 3D joint keypoint trajectories. By combining history-conditioned adaptation with online distillation, HoRD enables a single policy to adapt zero-shot to unseen domains without per-domain retraining. Extensive experiments show that HoRD outperforms strong baselines in robustness and transfer, especially under unseen domains and external perturbations. Code and project page are available at https://tonywang-0517.github.io/hord/.
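The two-stage recipe described above can be illustrated with a minimal sketch. This is not the paper's implementation: the network shapes, the linear/tanh layers, and the function names (`infer_context`, `teacher_act`, `distill_step`) are all hypothetical stand-ins. Stage 1 is represented by a teacher that conditions on a latent context encoded from a recent state--action history window; Stage 2 is represented by online distillation, where a simpler student (its state input standing in for the sparse keypoint observations) regresses the teacher's actions on freshly sampled states.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACT_DIM, HIST_LEN, CTX_DIM = 4, 2, 8, 3

# Stage 1 (sketch): a frozen "teacher" that infers a latent dynamics
# context from the flattened state-action history window. Weights are
# random placeholders for a policy trained with RL under randomization.
W_enc = rng.normal(size=(HIST_LEN * (STATE_DIM + ACT_DIM), CTX_DIM)) * 0.1
W_teacher = rng.normal(size=(STATE_DIM + CTX_DIM, ACT_DIM)) * 0.1

def infer_context(history):
    """Map a (HIST_LEN, STATE_DIM + ACT_DIM) history window to a context."""
    return np.tanh(history.reshape(-1) @ W_enc)

def teacher_act(state, history):
    ctx = infer_context(history)
    return np.tanh(np.concatenate([state, ctx]) @ W_teacher)

# Stage 2 (sketch): online distillation. The student observes only the
# current state and is regressed onto the teacher's action.
W_student = np.zeros((STATE_DIM, ACT_DIM))

def distill_step(state, history, lr=0.1):
    """One SGD step on 0.5 * ||student(state) - teacher(state, history)||^2."""
    global W_student
    target = teacher_act(state, history)
    pred = state @ W_student
    W_student -= lr * np.outer(state, pred - target)  # gradient w.r.t. W_student
    return float(np.mean((pred - target) ** 2))

losses = [
    distill_step(
        rng.normal(size=STATE_DIM),
        rng.normal(size=(HIST_LEN, STATE_DIM + ACT_DIM)),
    )
    for _ in range(200)
]
print(f"mean loss, first 50 steps: {np.mean(losses[:50]):.4f}")
print(f"mean loss, last 50 steps:  {np.mean(losses[-50:]):.4f}")
```

The student's loss plateaus above zero because part of the teacher's action depends on the history-derived context the student cannot observe; in the full framework the transformer student compensates by learning robust behavior directly from the teacher's corrective supervision.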