In this work we consider Code World Models, world models generated by a Large Language Model (LLM) in the form of Python code for model-based Reinforcement Learning (RL). Calling code instead of LLMs for planning has the potential to be more precise, reliable, interpretable, and extremely efficient. However, writing appropriate Code World Models requires the ability to understand complex instructions, to generate exact code with non-trivial logic, and to self-debug a long program with feedback from unit tests and environment trajectories. To address these challenges, we propose Generate, Improve and Fix with Monte Carlo Tree Search (GIF-MCTS), a new code generation strategy for LLMs. To test our approach in an offline RL setting, we introduce the Code World Models Benchmark (CWMB), a suite of program synthesis and planning tasks comprising 18 diverse RL environments paired with corresponding textual descriptions and curated trajectories. GIF-MCTS surpasses all baselines on the CWMB and two other benchmarks, and we show that the Code World Models synthesized with it can be successfully used for planning, resulting in model-based RL agents with greatly improved sample efficiency and inference speed.