Manipulation tasks often consist of subtasks, each representing a distinct skill. Mastering these skills is essential for robots, as it enhances their autonomy, efficiency, adaptability, and ability to operate in their environments. Learning from demonstrations allows robots to rapidly acquire new skills without starting from scratch, with demonstrations typically sequencing multiple skills to accomplish a task. Behaviour cloning approaches to learning from demonstration commonly rely on mixture density network output heads to predict robot actions. In this work, we first reinterpret the mixture density network as a library of feedback controllers (or skills) conditioned on latent states. This reinterpretation arises from the observation that a one-layer linear network is functionally equivalent to a classical feedback controller, with the network weights corresponding to controller gains. We use this insight to derive a probabilistic graphical model that combines these elements, describing the skill acquisition process as segmentation in a latent space, where each skill policy acts as a feedback control law in that latent space. Our approach significantly improves not only task success rate but also robustness to observation noise when trained on human demonstrations. Our physical robot experiments further show that this induced robustness improves model deployment on real robots.
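The claimed equivalence between a one-layer linear network and a classical feedback controller can be made concrete with a short sketch. This is an illustrative example, not the paper's implementation: identifying the layer weights as W = -K and the bias as b = K x_ref (where K is a gain matrix and x_ref a reference state, both assumed here) makes the linear layer compute exactly the state-feedback law u = -K (x - x_ref).

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_u = 4, 2  # state and action dimensions (arbitrary for illustration)

K = rng.standard_normal((dim_u, dim_x))  # controller gain matrix (assumed)
x_ref = rng.standard_normal(dim_x)       # reference (setpoint) state (assumed)

# Equivalent one-layer linear network parameters: u = W x + b
W = -K
b = K @ x_ref

x = rng.standard_normal(dim_x)           # a current (latent) state

u_controller = -K @ (x - x_ref)          # classical feedback control law
u_network = W @ x + b                    # one-layer linear network output

# The two compute the same action for every state x.
assert np.allclose(u_controller, u_network)
```

Under this view, each mixture component of a mixture density network head supplies its own (W, b) pair, i.e. its own gains and setpoint, which is what motivates reading the mixture as a library of feedback controllers.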