This paper focuses on transferring control policies between robot manipulators with different morphologies. While reinforcement learning (RL) methods have shown successful results in robot manipulation tasks, transferring a trained policy from simulation to a real robot, or deploying it on a robot with different states, actions, or kinematics, remains challenging. To achieve cross-embodiment policy transfer, our key insight is to project the state and action spaces of the source and target robots into a common latent-space representation. We first introduce encoders and decoders that associate the states and actions of the source robot with a latent space. The encoders, decoders, and a latent-space control policy are trained simultaneously using loss functions that measure task performance, latent dynamics consistency, and the ability of the encoder-decoder pair to reconstruct the original states and actions. To transfer the learned control policy, we only need to train target encoders and decoders that align a new target domain with the latent space. We use generative adversarial training with cycle-consistency and latent dynamics losses, without access to the task reward or reward tuning in the target domain. We demonstrate sim-to-sim and sim-to-real manipulation policy transfer with source and target robots that differ in their states, actions, and embodiments. The source code is available at \url{https://github.com/ExistentialRobotics/cross_embodiment_transfer}.
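The abstract names three loss terms that are optimized jointly: task performance, latent dynamics consistency, and state/action reconstruction. The sketch below illustrates how such terms compose, using linear numpy maps as hypothetical stand-ins for the learned encoder, decoder, and latent dynamics networks; the dimensions, weights, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: source robot state and shared latent space.
STATE_DIM, LATENT_DIM = 7, 4

# Linear stand-ins for the learned networks (illustrative only).
E = rng.normal(size=(LATENT_DIM, STATE_DIM)) * 0.1   # state encoder
D = rng.normal(size=(STATE_DIM, LATENT_DIM)) * 0.1   # state decoder
F = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1  # latent dynamics model

def reconstruction_loss(s):
    """Penalizes failure to reconstruct the original state from its encoding."""
    return float(np.mean((D @ (E @ s) - s) ** 2))

def latent_dynamics_loss(s, s_next):
    """Penalizes disagreement between the predicted and encoded next latent state."""
    return float(np.mean((F @ (E @ s) - E @ s_next) ** 2))

def total_loss(s, s_next, task_loss, w_rec=1.0, w_dyn=1.0):
    """Weighted sum of task, reconstruction, and latent-dynamics terms."""
    return task_loss + w_rec * reconstruction_loss(s) + w_dyn * latent_dynamics_loss(s, s_next)

s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
loss = total_loss(s, s_next, task_loss=0.5)
```

Because the reconstruction and dynamics terms are non-negative squared errors, the combined objective can only add to the task loss; during source training all three are minimized jointly, while target-domain training replaces the task term with adversarial and cycle-consistency objectives.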