In robot learning, the mapping between high-dimensional observations such as RGB images and low-level robot actions, two inherently very different spaces, constitutes a challenging learning problem, especially with limited amounts of data. In this work, we introduce Render and Diffuse (R&D), a method that unifies low-level robot actions and RGB observations within the image space using virtual renders of a 3D model of the robot. Using this joint observation-action representation, it computes low-level robot actions via a learnt diffusion process that iteratively updates the virtual renders of the robot. This space unification simplifies the learning problem and introduces inductive biases that are crucial for sample efficiency and spatial generalisation. We thoroughly evaluate several variants of R&D in simulation and showcase their applicability on six everyday tasks in the real world. Our results show that R&D exhibits strong spatial generalisation capabilities and is more sample efficient than common image-to-action methods.
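The inference loop described above, where a sampled action is repeatedly rendered into the image and refined, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `render_robot`, `denoiser`, the fixed target, and all parameter choices are hypothetical stand-ins (a trained denoising network and a real renderer of the robot's 3D model would replace the stubs).

```python
import numpy as np

def render_robot(action, image_shape=(64, 64, 3)):
    # Hypothetical stand-in for projecting the robot's 3D model at the
    # pose implied by `action` into the camera frame; here we just mark
    # a single pixel so the loop is runnable.
    img = np.zeros(image_shape)
    u, v = (np.clip(action[:2], 0.0, 1.0) * (image_shape[0] - 1)).astype(int)
    img[u, v, :] = 1.0
    return img

def denoiser(observation, render, action, target=np.array([0.7, 0.3, 0.5])):
    # Hypothetical learned model: given the RGB observation and the current
    # virtual render, predict an update direction for the action. A trained
    # network would replace this; we nudge toward a fixed target for the demo.
    return target - action

def render_and_diffuse(observation, steps=50, lr=0.2):
    # Diffusion-style inference: start from a random action sample and
    # iteratively refine it, re-rendering the robot at each step so the
    # action and the observation live in the same image space.
    rng = np.random.default_rng(0)
    action = rng.uniform(0.0, 1.0, size=3)
    for _ in range(steps):
        r = render_robot(action)
        action = action + lr * denoiser(observation, r, action)
    return action
```

With the toy denoiser, the iterates contract geometrically toward the stand-in target, mirroring how the learnt process converges on the demonstrated action.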