In image-based robot manipulation tasks with large observation and action spaces, reinforcement learning struggles with low sample efficiency, slow training, and uncertain convergence. As an alternative, large pre-trained foundation models have shown promise in robotic manipulation, particularly in zero-shot and few-shot settings. However, using these models directly is unreliable due to their limited reasoning capabilities and difficulty in understanding physical and spatial contexts. This paper introduces ExploRLLM, a novel approach that leverages the inductive bias of foundation models (e.g., large language models) to guide exploration in reinforcement learning. We also exploit these foundation models to reformulate the action and observation spaces, improving training efficiency. Our experiments demonstrate that guided exploration yields significantly faster convergence than training without it. Additionally, we show that ExploRLLM outperforms vanilla foundation-model baselines and that a policy trained in simulation transfers to real-world settings without additional training.
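For intuition, the core idea of LLM-guided exploration can be sketched as follows: during rollouts, actions are occasionally drawn from a foundation-model prior instead of the current policy, while all resulting transitions are stored and learned from as usual. The sketch below is illustrative only; `guided_rollout`, `llm_suggest`, and `llm_prob` are hypothetical names under an assumed gym-style environment interface, not the paper's actual implementation.

```python
# Minimal sketch of LLM-guided exploration (hypothetical interfaces).
# With probability `llm_prob`, the agent executes an action suggested by a
# foundation model instead of sampling from its current policy. The LLM
# acts purely as an exploration prior; the RL agent still learns from
# every transition it collects.
import random


def guided_rollout(env, policy, llm_suggest, llm_prob=0.5, max_steps=200):
    """Run one episode, occasionally substituting LLM-suggested actions.

    `env` follows a standard reset()/step() interface, `policy(obs)` returns
    an action, and `llm_suggest(obs)` is a hypothetical wrapper that queries
    a foundation model and maps its answer into the action space.
    """
    obs = env.reset()
    transitions = []
    for _ in range(max_steps):
        if random.random() < llm_prob:
            action = llm_suggest(obs)  # exploration guided by the LLM prior
        else:
            action = policy(obs)       # ordinary on-policy action
        next_obs, reward, done, _info = env.step(action)
        transitions.append((obs, action, reward, next_obs, done))
        obs = next_obs
        if done:
            break
    return transitions
```

In this framing, the foundation model only biases which states the agent visits; the learned policy remains the object being optimized, which is why convergence can be faster without inheriting the LLM's unreliability at execution time.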