To substantially enhance robot intelligence, there is a pressing need to develop a large model that enables general-purpose robots to proficiently undertake a broad spectrum of manipulation tasks, akin to the versatile task-planning ability exhibited by LLMs. The vast diversity of objects, robots, and manipulation tasks poses significant challenges. Our work introduces a comprehensive framework for developing a foundation model for general robotic manipulation that formalizes a manipulation task as contact synthesis. Specifically, our model takes as input object and robot manipulator point clouds, object physical attributes, target motions, and manipulation region masks. It outputs contact points on the object and the associated contact forces or post-contact motions that the robot should apply to achieve the desired manipulation task. We perform extensive experiments in both simulation and real-world settings, manipulating articulated rigid objects, rigid objects, and deformable objects that vary in dimensionality, ranging from one-dimensional objects like ropes to two-dimensional objects like cloth and extending to three-dimensional objects such as plasticine. Our model achieves average success rates of around 90\%. Supplementary materials and videos are available on our project website at https://manifoundationmodel.github.io/.
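To make the contact-synthesis formulation concrete, the sketch below shows one plausible input/output interface for such a model. All names (`ManipulationQuery`, `ContactPlan`, `synthesize_contacts`) are illustrative assumptions, and the selection logic is a trivial placeholder heuristic, not the paper's learned model; it only demonstrates the data flow from point cloud plus mask plus target motion to contact points and forces.

```python
from dataclasses import dataclass
from typing import List, Tuple
import math

Point = Tuple[float, float, float]

@dataclass
class ManipulationQuery:
    """Inputs to the contact-synthesis model (hypothetical interface)."""
    object_points: List[Point]   # object point cloud
    region_mask: List[bool]      # manipulation region mask, one flag per point
    target_motion: Point         # desired object translation (simplified)

@dataclass
class ContactPlan:
    """Outputs: contact points on the object and associated contact forces."""
    contact_points: List[Point]
    contact_forces: List[Point]

def synthesize_contacts(q: ManipulationQuery, max_contacts: int = 2) -> ContactPlan:
    # Placeholder heuristic, NOT the learned model: keep only points inside
    # the manipulation mask and push each along the unit target-motion vector.
    masked = [p for p, m in zip(q.object_points, q.region_mask) if m]
    chosen = masked[:max_contacts]
    norm = math.sqrt(sum(c * c for c in q.target_motion)) or 1.0
    unit = tuple(c / norm for c in q.target_motion)
    return ContactPlan(contact_points=chosen, contact_forces=[unit] * len(chosen))
```

In the actual system, the heuristic body would be replaced by a network that also consumes the robot manipulator point cloud and object physical attributes; the surrounding dataclasses merely illustrate how a task specification maps to a contact plan.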