Despite their impressive performance across numerous tasks, large language models (LLMs) often fail at simple decision-making tasks due to the misalignment between the knowledge in LLMs and the environments. In contrast, reinforcement learning (RL) agents learn policies from scratch, which keeps them aligned with environments but makes it difficult to incorporate prior knowledge for efficient exploration. To narrow the gap, we propose TWOSOME, a novel general online framework that deploys LLMs as decision-making agents to efficiently interact and align with embodied environments via RL, without requiring any prepared datasets or prior knowledge of the environments. First, we query the joint probabilities of each valid action with LLMs to form behavior policies. Then, to enhance the stability and robustness of the policies, we propose two normalization methods and summarize four prompt design principles. Finally, we design a novel parameter-efficient training architecture in which the actor and critic share one frozen LLM equipped with low-rank adapters (LoRA) updated by PPO. We conduct extensive experiments to evaluate TWOSOME. i) TWOSOME exhibits significantly better sample efficiency and performance than the conventional RL method, PPO, and the prompt tuning method, SayCan, in both the classical decision-making environment Overcooked and the simulated household environment VirtualHome. ii) Benefiting from LLMs' open-vocabulary feature, TWOSOME shows superior generalization to unseen tasks. iii) Under our framework, the LLMs suffer no significant loss of their original abilities during online PPO finetuning.
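To illustrate the policy-formation step, the sketch below shows one plausible way to turn per-token log-probabilities of candidate action phrases into a behavior policy. The token log-probability values and helper names are hypothetical (the abstract does not fix an API); the normalization shown (averaging log-probs over token count) is a minimal example of the kind of length normalization the framework motivates, since a raw joint probability penalizes longer action phrases.

```python
import math

def action_logprob(token_logps):
    """Joint log-probability of an action phrase = sum of its token log-probs,
    as an LLM would assign them when the action text follows the observation prompt."""
    return sum(token_logps)

def token_normalized_logprob(token_logps):
    """Token-level normalization: average log-prob per token, so longer
    action phrases are not unfairly penalized by the joint probability."""
    return sum(token_logps) / len(token_logps)

def behavior_policy(actions_token_logps, normalize=True):
    """Softmax over (optionally normalized) action scores -> a distribution
    over the valid actions, usable as an RL behavior policy."""
    scores = [
        token_normalized_logprob(lp) if normalize else action_logprob(lp)
        for lp in actions_token_logps
    ]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Toy example with made-up token log-probs for two candidate actions:
short_action = [-1.0, -0.5]                # e.g. a 2-token action phrase
long_action = [-0.6, -0.4, -0.5, -0.7]     # e.g. a 4-token action phrase
probs = behavior_policy([short_action, long_action])
```

Without normalization the 4-token phrase has the lower joint log-probability (-2.2 vs. -1.5) purely because it is longer; with per-token averaging its score (-0.55) exceeds the short phrase's (-0.75), so the policy prefers it.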