Tactile sensing allows robots to gather detailed geometric information about objects through physical interaction, complementing vision-based approaches. However, efficiently acquiring useful tactile data remains challenging due to the time-consuming nature of physical contact and the need to strategically choose contact locations that maximize information gain while minimizing physical interactions. This paper studies how different contact modes affect object shape reconstruction using a tactile-enabled dexterous gripper. We compare three contact interaction modes: grasp-releasing, sliding induced by finger-grazing, and palm-rolling. These contact modes are combined with an information-theoretic exploration framework that guides subsequent sampling locations using a shape completion model. Our results show that the improved tactile sensing efficiency of finger-grazing and palm-rolling translates into faster convergence in shape reconstruction, requiring 34% fewer physical interactions while improving reconstruction accuracy by 55%. We validate our approach using a UR5e robot arm equipped with an Inspire-Robots Dexterous Hand, showing robust performance across primitive object geometries.
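The abstract mentions an information-theoretic exploration framework that uses a shape completion model to pick the next contact location. The paper's actual method is not reproduced here; the following is a minimal, hypothetical sketch of one common way such a selection step can work, assuming the completion model outputs per-voxel occupancy probabilities and that each candidate contact would observe a known set of voxels. All names and the entropy-based scoring rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: choose the next contact by expected information gain,
# scored as the summed Shannon entropy of the voxels a contact would observe.
# This is NOT the paper's framework; it only illustrates the general idea.
import numpy as np

def voxel_entropy(p, eps=1e-9):
    """Shannon entropy (bits) of per-voxel Bernoulli occupancy probabilities."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def select_next_contact(candidate_points, occupancy_prob, neighborhoods):
    """Return the candidate contact whose observed voxels are most uncertain.

    candidate_points : (N, 3) array of candidate contact locations
    occupancy_prob   : (V,) predicted occupancy probability for each voxel
    neighborhoods    : list of N index arrays, the voxels each contact would touch
    """
    entropy = voxel_entropy(occupancy_prob)
    scores = np.array([entropy[idx].sum() for idx in neighborhoods])
    best = int(np.argmax(scores))
    return candidate_points[best], scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = rng.uniform(-0.1, 0.1, size=(20, 3))       # candidate contacts (m)
    occupancy = rng.uniform(0.0, 1.0, size=500)              # toy completion output
    neigh = [rng.choice(500, size=30, replace=False) for _ in range(20)]
    point, score = select_next_contact(candidates, occupancy, neigh)
    print("next contact:", point, "expected info (bits):", score)
```

In a real pipeline the toy occupancy vector would be replaced by the shape completion model's predictions, and the chosen point would be executed with one of the three contact modes before re-running the completion and repeating the loop.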