We propose a novel pipeline for unknown object grasping in shared robotic autonomy scenarios. State-of-the-art methods for fully autonomous scenarios are typically learning-based approaches optimised for a specific end-effector that generate grasp poses directly from sensor input. In the domain of assistive robotics, we seek instead to utilise the user's cognitive abilities for enhanced satisfaction, grasping performance, and alignment with their high-level task-specific goals. Given a pair of stereo images, we perform unknown object instance segmentation and generate a 3D reconstruction of the object of interest. In shared control, the user then guides the robot end-effector across a virtual hemisphere centred around the object to their desired approach direction. A physics-based grasp planner finds the most stable local grasp on the reconstruction, and finally the user is guided by shared control to this grasp. In experiments on the DLR EDAN platform, we report a grasp success rate of 87% for 10 unknown objects, and demonstrate the method's capability to grasp objects in structured clutter and from shelves.
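To make the virtual-hemisphere interaction concrete, the user's 2-DoF input can be interpreted as spherical coordinates on a hemisphere around the object, yielding an end-effector position and an approach direction pointing at the object centre. The sketch below is purely illustrative and is not the paper's implementation; the function name, coordinate convention, and fixed radius are assumptions for this example.

```python
import math

def hemisphere_pose(center, radius, azimuth, elevation):
    """Illustrative sketch (not the paper's code): map a 2-DoF user input
    (azimuth in [0, 2*pi), elevation in [0, pi/2]) to a point on a virtual
    hemisphere of the given radius around the object centre, plus the unit
    approach direction pointing from that point toward the centre."""
    cx, cy, cz = center
    # Position on the upper hemisphere: elevation 0 gives a horizontal
    # approach, elevation pi/2 a top-down approach.
    x = cx + radius * math.cos(elevation) * math.cos(azimuth)
    y = cy + radius * math.cos(elevation) * math.sin(azimuth)
    z = cz + radius * math.sin(elevation)
    # Approach direction: unit vector from the end-effector toward the object.
    ax, ay, az = cx - x, cy - y, cz - z
    n = math.sqrt(ax * ax + ay * ay + az * az)
    return (x, y, z), (ax / n, ay / n, az / n)

# Example: top-down approach from 0.3 m above the object centre.
pos, approach = hemisphere_pose((0.0, 0.0, 0.0), 0.3, 0.0, math.pi / 2)
```

In this parameterisation the user only steers two angles while the system keeps the end-effector at a fixed standoff distance and oriented toward the object, which matches the abstract's description of guiding the gripper across the hemisphere to a desired approach direction.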