Imitation learning is a method for enabling robots to adaptively reproduce human demonstrations. Its generalization ability has been shown to let robots perform tasks adaptively even in untrained environments. However, motion styles such as the motion trajectory and the amount of applied force depend largely on the human demonstration dataset and tend to converge to an average style. In this study, we propose a method that adds parametric bias to a conventional imitation learning network, making it possible to impose constraints on the motion style. Through experiments with PR2 and the musculoskeletal humanoid MusashiLarm, we show that tasks can be performed while changing the motion style as intended, under constraints on joint velocity, muscle length velocity, and muscle tension.
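As a rough illustration of the idea (not the authors' implementation; the architecture, dimensions, and random weights here are purely hypothetical), a parametric-bias vector can be fed into a recurrent dynamics model alongside the state, so that varying the bias at inference time changes the rolled-out trajectory, i.e. the motion style, without retraining:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only; not taken from the paper.
STATE_DIM, HIDDEN_DIM, PB_DIM = 8, 32, 2

# Randomly initialized weights for a single recurrent cell.
W_in  = rng.standard_normal((HIDDEN_DIM, STATE_DIM)) * 0.1
W_pb  = rng.standard_normal((HIDDEN_DIM, PB_DIM)) * 0.1
W_h   = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_out = rng.standard_normal((STATE_DIM, HIDDEN_DIM)) * 0.1

def step(state, hidden, pb):
    """One recurrent step: the parametric bias pb enters alongside the
    state, so changing pb modulates the predicted next state."""
    hidden = np.tanh(W_in @ state + W_pb @ pb + W_h @ hidden)
    return W_out @ hidden, hidden

def rollout(init_state, pb, horizon=10):
    """Roll the model forward for `horizon` steps under a fixed pb."""
    state, hidden = init_state, np.zeros(HIDDEN_DIM)
    traj = []
    for _ in range(horizon):
        state, hidden = step(state, hidden, pb)
        traj.append(state)
    return np.stack(traj)

s0 = rng.standard_normal(STATE_DIM)
traj_a = rollout(s0, pb=np.array([0.0, 0.0]))
traj_b = rollout(s0, pb=np.array([1.0, -1.0]))
# Different pb values produce different trajectories from the same
# start state, which is the mechanism the abstract exploits to
# constrain quantities such as joint velocity or muscle tension.
```

In the paper's setting, the bias would be optimized so that the resulting rollout satisfies the desired style constraints; the sketch above only shows the structural role the bias plays.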