In teleoperation of contact-rich manipulation tasks, selecting robot impedance is critical but difficult. The robot must be compliant enough to avoid damaging the environment, yet stiff enough to remain responsive and to apply force when needed. In this paper, we present Stiffness Copilot, a vision-based policy for shared-control teleoperation in which the operator commands the robot's pose and the policy adjusts the robot's impedance online. To train Stiffness Copilot, we first infer direction-dependent stiffness matrices in simulation using privileged contact information. We then use these matrices to supervise a lightweight vision policy that predicts robot stiffness from wrist-camera images and transfers zero-shot to real images at runtime. In a human-subject study, Stiffness Copilot achieved safety comparable to a constant low-stiffness baseline while matching the efficiency of a constant high-stiffness baseline.
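To make the shared-control split concrete, here is a minimal sketch of one control step under a Cartesian impedance law, assuming a diagonal stiffness matrix and critical damping. The function `predict_stiffness` is a hypothetical stand-in for the learned vision policy, not the paper's actual network; the gains and the impedance formulation are illustrative assumptions.

```python
import numpy as np

def impedance_wrench(K, x_err, x_dot, damping_ratio=1.0):
    """Cartesian impedance law: F = K @ x_err - D @ x_dot,
    with D chosen for critical damping (assumes K is diagonal)."""
    D = 2.0 * damping_ratio * np.sqrt(K)
    return K @ x_err - D @ x_dot

def predict_stiffness(image):
    # Hypothetical stand-in for the vision policy: a real policy would
    # map wrist-camera pixels to direction-dependent gains. Here we
    # simply return a fixed diagonal stiffness (N/m), compliant along z.
    k = np.array([800.0, 800.0, 150.0])
    return np.diag(k)

# One control step: the operator commands pose, the copilot sets stiffness.
image = np.zeros((64, 64, 3))        # placeholder wrist-camera frame
K = predict_stiffness(image)
x_err = np.array([0.01, 0.0, 0.02])  # commanded minus measured position (m)
x_dot = np.zeros(3)                  # end-effector velocity (m/s)
F = impedance_wrench(K, x_err, x_dot)
```

Because the stiffness is direction-dependent, the same positional error produces a smaller restoring force along the compliant axis, which is what lets the robot press firmly in some directions while yielding in others.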