We present a self-supervised sensorimotor pre-training approach for robotics. Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens. Given a sequence of camera images, proprioceptive robot states, and actions, we encode the sequence into tokens, mask out a subset, and train a model to predict the missing content from the rest. We hypothesize that if a robot can predict the masked-out content, it will have acquired a good model of the physical world that can enable it to act. RPT is designed to operate on latent visual representations, which makes prediction tractable, enables scaling to larger models, and allows fast inference on a real robot. To evaluate our approach, we collected a dataset of 20,000 real-world trajectories over 9 months using a combination of motion planning and grasping algorithms. We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
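The masked-prediction setup described above can be illustrated with a minimal sketch of the data preparation step: interleave vision, proprioception, and action tokens into one sequence, then mask a random subset to form prediction targets. All dimensions, the interleaving order, and the zero-placeholder for masked tokens are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 8, 16                       # timesteps and token dimension (illustrative)
vision = rng.normal(size=(T, D))   # latent visual tokens, e.g. from a frozen encoder
proprio = rng.normal(size=(T, D))  # proprioceptive state tokens
action = rng.normal(size=(T, D))   # action tokens

# Interleave into one sensorimotor sequence: [v_0, p_0, a_0, v_1, p_1, a_1, ...]
tokens = np.stack([vision, proprio, action], axis=1).reshape(T * 3, D)

def mask_tokens(tokens, mask_ratio=0.5, rng=rng):
    """Mask a random subset of tokens; return model inputs, targets, and the mask."""
    n = tokens.shape[0]
    n_mask = int(n * mask_ratio)
    idx = rng.permutation(n)[:n_mask]
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    inputs = tokens.copy()
    inputs[mask] = 0.0             # replace masked tokens with a placeholder
    targets = tokens[mask]         # the model is trained to reconstruct these
    return inputs, targets, mask

inputs, targets, mask = mask_tokens(tokens)
print(mask.sum(), targets.shape)   # 12 masked tokens, each of dimension 16
```

A Transformer would then consume `inputs` (plus positional and modality embeddings) and be trained with a reconstruction loss on `targets`; this sketch only shows how the masked training pairs are formed.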