As robots are increasingly deployed across diverse application domains, enabling robust mobility across different embodiments has become a critical challenge. Classical mobility stacks, though effective on specific platforms, require extensive per-robot tuning and do not scale easily to new embodiments. Learning-based approaches such as imitation learning (IL) offer alternatives, but are limited by the need for high-quality demonstrations for each embodiment. To address these challenges, we introduce COMPASS, a unified framework that enables scalable cross-embodiment mobility using expert demonstrations from only a single embodiment. We first pre-train a mobility policy on a single robot using IL, combining a world model with a policy model. We then apply residual reinforcement learning (RL) to efficiently adapt this policy to diverse embodiments through corrective refinements. Finally, we distill the resulting specialist policies into a single generalist policy conditioned on an embodiment embedding vector. This design significantly reduces the burden of data collection while enabling robust generalization across a wide range of robot designs. Our experiments demonstrate that COMPASS scales effectively across diverse robot platforms while maintaining adaptability to various environment configurations: the generalist policy achieves a success rate approximately 5x higher than the pre-trained IL policy on unseen embodiments, and further transfers zero-shot from simulation to real robots.
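The pipeline above composes a frozen IL-pretrained base policy with an embodiment-conditioned residual correction. A minimal sketch of that composition follows; all function names, network stand-ins, and shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def base_policy(obs):
    # Stand-in for the IL-pretrained mobility policy, kept frozen
    # during residual adaptation.
    return np.tanh(obs[:2])

def residual_policy(obs, embodiment_embedding):
    # Stand-in for the RL-trained corrective term, conditioned on an
    # embodiment embedding so a single generalist network can serve
    # many robot designs.
    return 0.1 * np.tanh(obs[:2] + embodiment_embedding)

def compass_action(obs, embodiment_embedding):
    # Residual composition: final action = pretrained action + correction.
    return base_policy(obs) + residual_policy(obs, embodiment_embedding)

obs = np.array([0.5, -0.3, 1.0])        # hypothetical observation
embedding = np.array([0.2, 0.1])        # hypothetical per-embodiment vector
action = compass_action(obs, embedding)
```

The small residual scale reflects the design intent: the correction refines, rather than replaces, the pretrained behavior, which is what makes per-embodiment RL adaptation sample-efficient.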