Measuring grasp stability is an important skill for dexterous robot manipulation tasks, and it can be inferred from haptic information captured by a tactile sensor. Control policies must detect rotational displacement and slippage from tactile feedback and determine a re-grasp strategy in terms of location and force. The classic stable grasp task only trains control policies to solve for the re-grasp location, using objects with a fixed center of gravity. In this work, we propose a revamped version of the stable grasp task that optimizes both re-grasp location and gripping force for objects with an unknown and moving center of gravity. We tackle this task with a model-free, end-to-end Transformer-based reinforcement learning framework. We show that, after training, our approach is able to solve both objectives in simulation and, via zero-shot transfer, in a real-world setup. We also provide a performance analysis of different models to understand the dynamics of optimizing two opposing objectives.