We introduce GEOTACT, the first robotic system capable of grasping and retrieving objects of potentially unknown shapes buried in a granular environment. While important in many applications, ranging from mining and exploration to search and rescue, this type of interaction with granular media is difficult due to the uncertainty stemming from visual occlusion and noisy contact signals. To address these challenges, we use a learning method relying exclusively on touch feedback, trained end-to-end with simulated sensor noise. We show that our problem formulation leads to the natural emergence of learned pushing behaviors that the manipulator uses to reduce uncertainty and funnel the object to a stable grasp despite spurious and noisy tactile readings. We introduce a training curriculum that bootstraps learning in simulated granular environments, enabling zero-shot transfer to real hardware. Despite being trained only on seven objects with primitive shapes, our method is shown to successfully retrieve 35 different objects, including rigid, deformable, and articulated objects with complex shapes. Videos and additional information can be found at https://jxu.ai/geotact.