Learning from Demonstrations (LfD) and Reinforcement Learning (RL) have enabled robot agents to accomplish complex tasks. Reward Machines (RMs) enhance RL's capability to train policies over extended time horizons by structuring high-level task information. In this work, we introduce a novel LfD approach for learning RMs directly from visual demonstrations of robotic manipulation tasks. Unlike previous methods, our approach requires no predefined propositions or prior knowledge of the underlying sparse reward signals. Instead, it jointly learns the RM structure and identifies key high-level events that drive transitions between RM states. We validate our method on vision-based manipulation tasks, showing that the inferred RM accurately captures task structure and enables an RL agent to effectively learn an optimal policy.
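To make the Reward Machine idea concrete: an RM is a finite-state automaton whose transitions fire on high-level events and emit reward, turning a sparse signal into structured task progress. Below is a minimal sketch for a hypothetical two-step pick-and-place task; the state names (`u0`, `u1`, `u2`), events (`grasped`, `placed`), and rewards are illustrative assumptions, not taken from this work.

```python
class RewardMachine:
    """Minimal Reward Machine: states, event-driven transitions, transition rewards.

    Illustrative sketch only; the paper's learned RMs also infer the
    high-level events themselves from visual demonstrations.
    """

    def __init__(self, transitions, initial_state, terminal_states):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial_state
        self.terminal_states = terminal_states

    def step(self, event):
        """Advance on a high-level event; unknown events self-loop with 0 reward."""
        if (self.state, event) in self.transitions:
            self.state, reward = self.transitions[(self.state, event)]
            return reward
        return 0.0

    def is_done(self):
        return self.state in self.terminal_states


# Hypothetical RM: u0 (start) -> u1 (object grasped) -> u2 (placed, terminal).
rm = RewardMachine(
    transitions={
        ("u0", "grasped"): ("u1", 0.0),
        ("u1", "placed"): ("u2", 1.0),  # sparse reward only at task completion
    },
    initial_state="u0",
    terminal_states={"u2"},
)

rewards = [rm.step(e) for e in ["noop", "grasped", "placed"]]
# rewards == [0.0, 0.0, 1.0]; rm.is_done() is True
```

During RL training, the agent conditions its policy on the current RM state, so the machine effectively decomposes a long-horizon sparse-reward task into short-horizon subtasks.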