Robotic grasping is an essential capability, playing a critical role in enabling robots to physically interact with their surroundings. Despite extensive research, challenges remain due to the diverse shapes and properties of target objects, inaccuracies in sensing, and potential collisions with the environment. In this work, we propose a method for effective grasping in cluttered bin-picking environments where these challenges intersect. We utilize a multi-functional gripper that combines both suction and finger grasping to handle a wide range of objects. We also present an active gripper adaptation strategy that minimizes collisions between the gripper hardware and the surrounding environment by actively leveraging the reciprocating suction cup and reconfigurable finger motion. To fully exploit the gripper's capabilities, we built a neural network that detects suction and finger grasp points from a single input RGB-D image. This network is trained on a large-scale synthetic dataset generated in simulation. In addition, we propose an efficient approach to constructing a real-world dataset that facilitates grasp point detection on objects with diverse characteristics. Experimental results show that the proposed method can grasp objects in cluttered bin-picking scenarios while preventing collisions with environmental constraints such as the corners of the bin. Our proposed method demonstrated its effectiveness in the 9th Robotic Grasping and Manipulation Competition (RGMC) held at ICRA 2024.