Developing interactive systems that leverage natural language instructions to solve complex robotic control tasks has long been a goal of the robotics community. Large Language Models (LLMs) have demonstrated remarkable abilities on complex tasks, including logical reasoning, in-context learning, and code generation. However, predicting low-level robotic actions with LLMs poses significant challenges. Moreover, the complexity of such tasks typically requires acquiring policies that execute diverse subtasks and composing them to attain the ultimate objective. Hierarchical Reinforcement Learning (HRL) is an elegant framework for solving such tasks, offering the intuitive benefits of temporal abstraction and improved exploration. However, HRL suffers from a recurring non-stationarity issue caused by the changing behaviour of the lower-level primitive. In this work, we propose LGR2, a novel HRL framework that leverages language instructions to generate a stationary reward function for the higher-level policy. Since the language-guided reward is unaffected by the lower primitive's behaviour, LGR2 mitigates non-stationarity and thus offers an effective way to leverage language instructions for robotic control. To assess the efficacy of our approach, we perform an empirical analysis and demonstrate that LGR2 effectively alleviates non-stationarity in HRL. Our approach attains success rates exceeding 70\% in challenging, sparse-reward robotic navigation and manipulation environments where the baselines fail to make any significant progress. Additionally, we conduct real-world robotic manipulation experiments and demonstrate that LGR2 generalizes well to real-world scenarios.
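To make the core idea concrete, below is a minimal sketch, not the paper's implementation, of a higher-level reward derived purely from a language instruction. The encoder, the function names (`language_guided_reward`, `cosine_similarity`), and the cosine-similarity reward form are illustrative assumptions, since the abstract does not specify how the language-guided reward is parameterized.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def language_guided_reward(instruction_emb: np.ndarray,
                           achieved_state_emb: np.ndarray) -> float:
    """Higher-level reward computed from the language instruction alone.

    Because the reward depends only on the fixed instruction embedding and
    the achieved state, it does not shift as the lower primitive's policy
    evolves during training, keeping the higher-level reward stationary.
    """
    return cosine_similarity(instruction_emb, achieved_state_emb)

# Illustrative usage with random vectors standing in for the outputs of a
# real language/state encoder (e.g. a pretrained sentence encoder).
rng = np.random.default_rng(0)
instruction_emb = rng.normal(size=128)    # encodes, say, "push the block to the corner"
achieved_state_emb = rng.normal(size=128)  # encodes the outcome of a subgoal attempt
r_high = language_guided_reward(instruction_emb, achieved_state_emb)
print(f"higher-level reward: {r_high:.3f}")
```

The point of the sketch is the stationarity argument from the abstract: a conventional HRL higher-level reward depends on whether the lower primitive actually reached a proposed subgoal, and therefore changes as that primitive learns, whereas a reward grounded in the instruction embedding is independent of the lower primitive's parameters.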