We formulate grasp learning as a neural field and present Neural Grasp Distance Fields (NGDF). The input is the 6D pose of a robot end effector, and the output is the distance to a continuous manifold of valid grasps for an object. In contrast to current approaches that predict a set of discrete candidate grasps, the distance-based NGDF representation is easily interpreted as a cost, and minimizing this cost produces a successful grasp pose. This grasp distance cost can be incorporated directly into a trajectory optimizer for joint optimization with other costs such as trajectory smoothness and collision avoidance. During optimization, as the various costs are balanced and minimized, the grasp target is allowed to vary smoothly, since the learned grasp field is continuous. We evaluate NGDF on joint grasp and motion planning in simulation and the real world, outperforming baselines by 63% in execution success while generalizing to unseen query poses and unseen object shapes. Project page: https://sites.google.com/view/neural-grasp-distance-fields.
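The abstract's central mechanism, treating a learned grasp distance as a differentiable cost and minimizing it over end-effector poses, can be sketched as follows. This is a toy illustration, not the authors' implementation: since no trained model is available here, the neural field is replaced by an analytic stand-in (distance to a single hypothetical grasp pose), and a squared-distance cost is minimized by plain gradient descent for numerical stability.

```python
import numpy as np

# Hypothetical stand-in for a trained NGDF: the "field" is simply the
# Euclidean distance from a 6D pose (position + axis-angle rotation)
# to one known-good grasp pose. A real NGDF is a neural network whose
# zero level set is the manifold of valid grasps.
GRASP = np.array([0.3, 0.0, 0.2, 0.0, np.pi / 2, 0.0])

def grasp_distance(pose):
    """Distance from a query pose to the (toy) grasp manifold."""
    return np.linalg.norm(pose - GRASP)

def cost_grad(pose):
    """Gradient of the squared-distance cost 0.5 * d(pose)^2."""
    return pose - GRASP

def optimize(pose, lr=0.1, steps=200):
    # Gradient descent on the grasp cost alone; a full trajectory
    # optimizer would add smoothness and collision terms to the
    # same objective and step all trajectory waypoints jointly.
    for _ in range(steps):
        pose = pose - lr * cost_grad(pose)
    return pose

final_pose = optimize(np.zeros(6))
print(grasp_distance(final_pose))  # small residual: near a zero of the field
```

Because the cost is continuous and differentiable in the query pose, the optimizer can let the grasp target drift during optimization rather than committing to one discrete candidate up front, which is the property the abstract highlights over discrete grasp-set predictors.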