This paper presents a novel manipulation strategy that uses keypoint correspondences extracted from visuo-tactile sensor images to facilitate precise object manipulation. Our approach uses visuo-tactile feedback to guide the robot's actions for accurate object grasping and placement, eliminating the need for post-grasp adjustments and extensive training. This improves deployment efficiency and addresses the challenges of manipulation tasks in environments where object locations are not predefined. We validate the effectiveness of our strategy through experiments demonstrating the extraction of keypoint correspondences and their application to real-world tasks such as block alignment and gear insertion, which require millimeter-level precision. The results show an average error significantly lower than that of traditional vision-based methods, sufficient to achieve the target tasks.
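To make the core idea concrete, the sketch below illustrates one way keypoint correspondences between two tactile images could be turned into an alignment correction: descriptors are matched by mutual nearest neighbours, and the matched keypoint positions yield an average 2-D offset the controller could act on. This is a minimal illustration under assumed inputs, not the paper's actual pipeline; the function names, descriptor format, and the translation-only correction are all simplifying assumptions made for this example.

```python
import math

def match_keypoints(desc_a, desc_b):
    """Mutual nearest-neighbour matching between two descriptor lists.

    Each descriptor is a tuple of floats; returns index pairs (i, j)
    where desc_a[i] and desc_b[j] are each other's closest match.
    (Hypothetical helper for illustration only.)
    """
    def nn(d, pool):
        return min(range(len(pool)), key=lambda j: math.dist(d, pool[j]))
    ab = [nn(d, desc_b) for d in desc_a]          # best match in B for each A
    ba = [nn(d, desc_a) for d in desc_b]          # best match in A for each B
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

def mean_offset(kp_a, kp_b, matches):
    """Average 2-D displacement between matched keypoint positions --
    a crude alignment correction a placement controller could apply."""
    dx = sum(kp_b[j][0] - kp_a[i][0] for i, j in matches) / len(matches)
    dy = sum(kp_b[j][1] - kp_a[i][1] for i, j in matches) / len(matches)
    return dx, dy

# Toy example: identical descriptors in shuffled order, keypoints
# in the second image shifted by (3, -2) pixels.
desc_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
desc_b = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
kp_a = [(10.0, 10.0), (20.0, 10.0), (10.0, 20.0)]
kp_b = [(23.0, 8.0), (13.0, 18.0), (13.0, 8.0)]

matches = match_keypoints(desc_a, desc_b)
print(mean_offset(kp_a, kp_b, matches))  # -> (3.0, -2.0)
```

A real system would use richer descriptors (e.g. learned or SIFT-like features from the sensor images) and estimate a full rigid transform rather than a pure translation, but the match-then-correct structure stays the same.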