Grasp planning and estimation are longstanding research problems in robotics, with two main approaches to finding graspable poses on objects: 1) geometric approaches, which rely on 3D models of the object and the gripper to estimate valid grasp poses, and 2) data-driven, learning-based approaches, which train models to identify grasp poses from raw sensor observations. The latter assumes comprehensive geometric coverage during training. However, data-driven approaches are typically biased toward tabletop scenarios and struggle to generalize to out-of-distribution settings with larger objects (e.g., a chair). Additionally, raw sensor data (e.g., RGB-D) from a single view of such larger objects is often incomplete and necessitates additional observations. In this paper, we take a geometric approach, leveraging advances in object modeling (e.g., NeRF) to build an implicit model from RGB images captured from views around the target object. This model enables the extraction of an explicit mesh model while also capturing the visual appearance from novel viewpoints, which is useful for perception tasks such as object detection and pose estimation. We further decompose the NeRF-reconstructed 3D mesh into superquadrics (SQs), parametric geometric primitives, each mapped to a set of precomputed grasp poses, allowing grasp composition on the target object based on these primitives. Our proposed pipeline overcomes two problems: a) noisy depth and incomplete single-view observations of the object, through the modeling step, and b) generalization to objects of arbitrary size. For more qualitative results, refer to the supplementary video and webpage: https://bit.ly/3ZrOanU
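To make the superquadric primitives concrete, the sketch below implements the standard inside-outside function of a superquadric (Barr's formulation), the kind of parametric surface the mesh is decomposed into. The function name, parameter layout, and test points are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def superquadric_F(points, scale, eps):
    """Inside-outside function of an axis-aligned superquadric.

    points : (N, 3) array of query points in the primitive's frame.
    scale  : (a1, a2, a3) semi-axis lengths along x, y, z.
    eps    : (e1, e2) shape exponents; e1 = e2 = 1 gives an ellipsoid.

    Returns F(x, y, z); F < 1 inside, F = 1 on the surface, F > 1 outside.
    """
    a1, a2, a3 = scale
    e1, e2 = eps
    x = np.abs(points[:, 0] / a1)
    y = np.abs(points[:, 1] / a2)
    z = np.abs(points[:, 2] / a3)
    # Combine x and y in the cross-section, then blend with z along the axis.
    xy = (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1)
    return xy + z ** (2.0 / e1)

# Example: a box-like superquadric (small exponents flatten the sides).
sq_scale = (0.5, 0.5, 0.5)
sq_eps = (0.3, 0.3)
center = np.array([[0.0, 0.0, 0.0]])
outside = np.array([[1.0, 1.0, 1.0]])
print(superquadric_F(center, sq_scale, sq_eps))   # value < 1: inside
print(superquadric_F(outside, sq_scale, sq_eps))  # value > 1: outside
```

A decomposition stage would fit `scale` and `eps` (plus a pose) per primitive by minimizing a loss on this function over mesh surface points; each recovered primitive then indexes its set of precomputed grasp poses.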