This paper presents a novel vision-based proprioception approach for a soft robotic finger that can estimate and reconstruct tactile interactions in both terrestrial and aquatic environments. The key to this system lies in the finger's unique metamaterial structure, which facilitates omni-directional passive adaptation during grasping, protecting delicate objects across diverse scenarios. A compact in-finger camera captures high-frame-rate images of the finger's deformation during contact, extracting crucial tactile data in real time. We present a volumetric discretized model of the soft finger and use the geometry constraints captured by the camera to find the optimal estimate of the deformed shape. The approach is benchmarked against a motion capture system with sparse markers and a haptic device with dense measurements. Both comparisons show state-of-the-art accuracy, with a median error of 1.96 mm for whole-body deformation, corresponding to 2.1% of the finger's length. More importantly, the state estimation remains robust both on land and underwater, as we demonstrate through underwater object shape sensing. This combination of passive adaptation and real-time tactile sensing paves the way for amphibious robotic grasping applications.
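To illustrate the estimation step described above, the following minimal sketch (not the authors' implementation) poses deformed-shape recovery as a regularized least-squares problem over the discretized node graph: a graph Laplacian stands in for the elastic prior on the volumetric model, and camera-observed keypoints enter as soft geometry constraints. All names (`estimate_deformation`, `rest_nodes`, `edges`, `observed_idx`, `observed_pos`, `lam`) are hypothetical.

```python
# Hedged sketch: deformed-shape estimation from sparse camera observations,
# assuming a Laplacian smoothness prior over a volumetric node graph.
# Not the paper's method; an illustration of constraint-based shape fitting.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def estimate_deformation(rest_nodes, edges, observed_idx, observed_pos, lam=10.0):
    """Solve min_x ||L (x - rest)||^2 + lam * ||x[obs] - observed||^2.

    rest_nodes:   (N, 3) rest-shape node positions of the discretized finger
    edges:        (E, 2) index pairs defining node connectivity
    observed_idx: (K,)   indices of nodes the in-finger camera tracks
    observed_pos: (K, 3) measured deformed positions of those nodes
    """
    n = rest_nodes.shape[0]
    # Graph Laplacian as a surrogate for the elastic/smoothness prior.
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    A = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
    # Selection matrix picking out the camera-observed nodes.
    C = sp.coo_matrix((np.ones(len(observed_idx)),
                       (np.arange(len(observed_idx)), observed_idx)),
                      shape=(len(observed_idx), n)).tocsr()
    lhs = (L.T @ L + lam * C.T @ C).tocsc()
    x = np.empty_like(rest_nodes)
    for d in range(3):  # each coordinate decouples into a linear solve
        rhs = L.T @ (L @ rest_nodes[:, d]) + lam * C.T @ observed_pos[:, d]
        x[:, d] = spla.spsolve(lhs, rhs)
    return x
```

Because the estimate reduces to a sparse linear solve, such a formulation can in principle run at the high frame rates the abstract mentions; the paper's actual model and constraints may differ.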