Many reinforcement learning environments (e.g., Minecraft) provide only sparse rewards that indicate task completion or failure with binary values. The resulting exploration inefficiency makes it difficult for reinforcement-learning-based agents to learn complex tasks. To address this, this paper introduces an advanced learning system, named Auto MC-Reward, that leverages Large Language Models (LLMs) to automatically design dense reward functions, thereby enhancing learning efficiency. Auto MC-Reward consists of three important components: Reward Designer, Reward Critic, and Trajectory Analyzer. Given the environment information and task description, the Reward Designer first designs the reward function by coding an executable Python function with predefined observation inputs. Then, the Reward Critic verifies the code, checking whether it is self-consistent and free of syntax and semantic errors. Further, the Trajectory Analyzer summarizes possible failure causes and provides refinement suggestions based on collected trajectories. In the next round, the Reward Designer further refines and iterates the dense reward function based on this feedback. Experiments demonstrate a significant improvement in the success rate and learning efficiency of our agents on complex tasks in Minecraft, such as obtaining diamonds while efficiently avoiding lava, and efficiently locating trees and animals that are sparse in the plains biome.
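To make the pipeline concrete, the following is a minimal sketch of the kind of dense reward function the Reward Designer might emit for the diamond-mining task. All observation fields (`y`, `nearby_blocks`, `obtained_diamond`) and the coefficient values are illustrative assumptions for this sketch, not the paper's actual observation API or learned rewards.

```python
# Hypothetical sketch of an LLM-generated dense reward function.
# The observation schema and shaping coefficients below are assumptions
# made for illustration; they are not taken from the paper.

def reward_fn(obs: dict, prev_obs: dict) -> float:
    """Dense shaping reward: descend toward diamond-bearing depths,
    avoid lava, and keep the sparse completion bonus on top."""
    reward = 0.0

    # Dense signal 1: reward downward progress (diamonds spawn deep).
    depth_gain = prev_obs["y"] - obs["y"]
    reward += 0.1 * depth_gain

    # Dense signal 2: penalize proximity to lava blocks.
    if "lava" in obs["nearby_blocks"]:
        reward -= 1.0

    # Original sparse signal: large bonus on task completion.
    if obs.get("obtained_diamond", False):
        reward += 10.0

    return reward
```

A Reward Critic in this setting would check such a function for consistency (e.g., that every referenced observation key exists in the predefined schema) before it is used for training, and the Trajectory Analyzer's feedback could later adjust the shaping terms, for instance increasing the lava penalty if rollouts show frequent lava deaths.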