Real-time prediction of deformation in highly compliant soft materials remains a significant challenge in soft robotics. While vision-based soft tactile sensors can track internal marker displacements, learning-based models for 3D contact estimation depend heavily on their training datasets, which inherently limits their ability to generalize to complex scenarios such as multi-point sensing. To address this limitation, we introduce TactiVerse, a U-Net-based framework that formulates contact geometry estimation as a spatial heatmap prediction task. Even when trained exclusively on a limited dataset of single-point indentations, our architecture achieves highly accurate single-point sensing, with a mean absolute error (MAE) of 0.0589 mm versus 0.0612 mm for a conventional regression-based CNN baseline. Furthermore, we demonstrate that augmenting the training dataset with multi-point contact data substantially enhances the sensor's multi-point sensing capability, reducing the overall MAE for two-point discrimination from 1.214 mm to 0.383 mm. By extrapolating complex contact geometries from fundamental interactions, this methodology enables advanced multi-point and large-area shape sensing. Ultimately, it streamlines the development of marker-based soft sensors, offering a scalable solution for real-world tactile perception.
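The abstract's core idea is recasting contact estimation as spatial heatmap prediction rather than direct coordinate regression, which lets one network output represent any number of contact points. The sketch below illustrates this formulation in isolation, not the paper's actual pipeline: the Gaussian target encoding, the `sigma` value, and the local-maximum decoding are all illustrative assumptions, and the U-Net itself is omitted.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """Encode one contact point (row, col) as a 2D Gaussian heatmap target.

    Illustrative assumption: the paper does not specify its target encoding.
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def decode_peaks(heatmap, threshold=0.5):
    """Recover contact locations as thresholded local maxima of the heatmap.

    Because decoding just finds peaks, the same representation handles
    single- and multi-point contacts without changing the network output.
    """
    peaks = []
    h = np.pad(heatmap, 1, mode="constant")  # pad so borders can be peaks
    for r in range(1, h.shape[0] - 1):
        for c in range(1, h.shape[1] - 1):
            patch = h[r - 1:r + 2, c - 1:c + 2]
            if h[r, c] >= threshold and h[r, c] == patch.max():
                peaks.append((r - 1, c - 1))  # undo padding offset
    return peaks

# A two-point contact is simply the sum of two single-point targets:
hm = gaussian_heatmap((32, 32), (8, 8)) + gaussian_heatmap((32, 32), (20, 24))
print(decode_peaks(hm))  # → [(8, 8), (20, 24)]
```

This is why the abstract can speak of "extrapolating complex contact geometries from fundamental interactions": a heatmap trained on single indentations already has the output space to express multiple simultaneous peaks, whereas a fixed-size coordinate regressor does not.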