We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps that enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. AO-Grasp consists of two main contributions: the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point cloud of a single articulated object, the AO-Grasp Model predicts the best grasp points on the object with an Actionable Grasp Point Predictor. Then, it finds corresponding grasp orientations for each of these points, resulting in stable and actionable grasp proposals. We train the AO-Grasp Model on our new AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp success rate, whereas the highest-performing baseline achieves a 35.0% success rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is the first method for generating 6 DoF grasps on articulated objects directly from partial point clouds without requiring part detection or hand-designed grasp heuristics. Project website: https://stanford-iprl-lab.github.io/ao-grasp