Large language models (LLMs) have recently transformed from text-based assistants into autonomous agents capable of planning, reasoning, and iteratively improving their actions. While numerical reward signals and verifiers can effectively rank candidate actions, they often provide limited contextual guidance. In contrast, natural language feedback better aligns with the generative capabilities of LLMs, providing richer and more actionable suggestions. However, parsing and implementing this feedback effectively can be challenging for LLM-based agents. In this work, we introduce Critique-Guided Improvement (CGI), a novel two-player framework comprising an actor model that explores an environment and a critic model that generates detailed natural language feedback. By training the critic to produce fine-grained assessments and actionable revisions, and the actor to utilize these critiques, our approach promotes more robust exploration of alternative strategies while avoiding local optima. Experiments in three interactive environments show that CGI outperforms existing baselines by a substantial margin. Notably, even a small critic model surpasses GPT-4 in feedback quality. The resulting actor achieves state-of-the-art performance, demonstrating the power of explicit iterative guidance to enhance decision-making in LLM-based agents.
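To make the two-player interaction concrete, the following is a minimal sketch of the critique-and-revise loop the abstract describes. The names `env`, `actor_generate`, `critic_generate`, the prompt wording, and the stopping condition are illustrative assumptions, not the paper's actual interface or training procedure.

```python
# Hypothetical sketch of the CGI actor-critic interaction loop.
# actor_generate / critic_generate stand in for calls to the trained
# actor and critic LLMs; env is any text-based interactive environment.

def cgi_episode(env, actor_generate, critic_generate, max_refinements=2):
    """Run one episode: the actor proposes an action, the critic returns
    natural language feedback, and the actor revises before acting."""
    observation = env.reset()
    done, reward = False, 0.0
    while not done:
        # Actor proposes an initial action from the current observation.
        action = actor_generate(
            f"Observation: {observation}\nPropose the next action."
        )
        for _ in range(max_refinements):
            # Critic gives a fine-grained assessment plus a suggested revision,
            # rather than a scalar score.
            critique = critic_generate(
                f"Observation: {observation}\nProposed action: {action}\n"
                "Assess this action and suggest a concrete revision if needed."
            )
            if "no revision needed" in critique.lower():
                break
            # Actor incorporates the textual critique into a revised action.
            action = actor_generate(
                f"Observation: {observation}\nPrevious action: {action}\n"
                f"Critique: {critique}\nProduce an improved action."
            )
        observation, reward, done = env.step(action)
    return reward
```

The sketch only illustrates the division of labor between the two models; the paper's contribution lies in how the critic is trained to produce such feedback and how the actor is trained to exploit it.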