Scalable robot policy pre-training has been hindered by the high cost of collecting high-quality demonstrations for each platform. In this study, we address this issue by uniting offline reinforcement learning (offline RL) with cross-embodiment learning. Offline RL leverages both expert and abundant suboptimal data, while cross-embodiment learning aggregates heterogeneous robot trajectories across diverse morphologies to acquire universal control priors. We perform a systematic analysis of this combined paradigm, providing a principled understanding of its strengths and limitations. To evaluate it, we construct a suite of locomotion datasets spanning 16 distinct robot platforms. Our experiments confirm that the combined approach excels at pre-training on datasets rich in suboptimal trajectories, outperforming pure behavior cloning. However, as the proportion of suboptimal data and the number of robot types increase, we observe that conflicting gradients across morphologies begin to impede learning. To mitigate this, we introduce an embodiment-based grouping strategy in which robots are clustered by morphological similarity and the model is updated with a group-level gradient. This simple, static grouping substantially reduces inter-robot gradient conflicts and outperforms existing conflict-resolution methods.
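The abstract does not spell out the exact update rule for the grouping strategy. A minimal sketch, assuming per-robot gradients are first averaged within each morphology group and each group then contributes equally to the model update (robot names, the `group_gradient` helper, and the grouping itself are illustrative, not the paper's implementation):

```python
import numpy as np

def group_gradient(per_robot_grads, groups):
    """Combine per-robot gradients via static embodiment groups.

    per_robot_grads: dict mapping robot_id -> gradient vector (np.ndarray)
    groups: dict mapping group_name -> list of robot_ids in that group

    Each group's gradient is the mean over its member robots; the final
    update direction is the mean over group gradients, so a group with
    many similar robots cannot dominate a group with few.
    """
    group_means = [
        np.mean([per_robot_grads[r] for r in robots], axis=0)
        for robots in groups.values()
    ]
    return np.mean(group_means, axis=0)

# Illustrative example: two quadrupeds whose gradients agree, and one
# biped whose gradient points in a conflicting direction.
grads = {
    "quad_a": np.array([1.0, 0.0]),
    "quad_b": np.array([3.0, 0.0]),
    "biped_a": np.array([0.0, 2.0]),
}
groups = {"quadrupeds": ["quad_a", "quad_b"], "bipeds": ["biped_a"]}
update = group_gradient(grads, groups)  # each group weighted equally
```

Equal per-group weighting is one plausible reading of "updated with a group-level gradient"; a weighted combination (e.g. by group dataset size) would be a straightforward variant.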