Humans can steadily and gently grasp unfamiliar objects based on tactile perception. Robots still struggle to achieve similar performance because accurate grasp-force prediction and generalizable force-control strategies are difficult to learn from limited data. In this article, we propose an approach for learning grasping from ideal force-control demonstrations, aiming to achieve performance comparable to human hands with a limited data size. Our approach uses objects with known contact characteristics to automatically generate reference force curves, without human demonstrations. In addition, we design a dual convolutional neural network (Dual-CNN) architecture that incorporates a physics-based mechanics module to learn target grasping-force predictions from demonstrations. The described method can be effectively applied to vision-based tactile sensors and enables gentle and stable grasping of objects from the ground. The prediction model and grasping strategy were validated in offline evaluations and online experiments, demonstrating their accuracy and generalizability.