Contact-rich manipulation remains a major challenge in robotics. Optical tactile sensors such as the GelSight Mini offer a low-cost solution for contact sensing by capturing deformations of their soft silicone gel. However, accurately inferring shear and normal force distributions from these gel deformations has yet to be fully addressed. In this work, we propose a machine learning approach using a U-Net architecture to predict force distributions directly from the sensor's raw images. Our model, trained on force distributions inferred from \ac{fea}, achieves promising accuracy in predicting normal and shear force distributions for the commercially available GelSight Mini sensor. It also shows potential for generalizing across indenters and across sensors of the same type, and for enabling real-time application. The codebase, dataset, and models are open-sourced and available at https://feats-ai.github.io .