Reinforcement Learning (RL) has demonstrated significant potential in enhancing the reasoning capabilities of large language models (LLMs). However, the success of RL for LLMs relies heavily on human-curated datasets and verifiable rewards, which limits its scalability and generality. Recent Self-Play RL methods, inspired by the paradigm's success in games such as Go, aim to enhance LLM reasoning capabilities without human-annotated data. However, these methods primarily depend on a grounded environment for feedback (e.g., a Python interpreter or a game engine), and extending them to general domains remains challenging. To address these challenges, we propose Multi-Agent Evolve (MAE), a framework that enables LLMs to self-evolve while solving diverse tasks, including mathematics, reasoning, and general-knowledge Q&A. The core design of MAE is a triplet of interacting agents (Proposer, Solver, Judge) instantiated from a single LLM, whose behaviors are jointly optimized with reinforcement learning. The Proposer generates questions, the Solver attempts solutions, and the Judge evaluates both, with all three roles co-evolving. Experiments on Qwen2.5-3B-Instruct demonstrate that MAE achieves an average improvement of 4.54% across multiple benchmarks. These results highlight MAE as a scalable, data-efficient method for enhancing the general reasoning abilities of LLMs with minimal reliance on human-curated supervision.
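The Proposer-Solver-Judge loop described above can be sketched as follows. This is a minimal illustration, not MAE's actual implementation: the function names, the `Transition` record, the role-conditioned call convention, and the stub model standing in for Qwen2.5-3B-Instruct are all assumptions made for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    """One self-play interaction, later used as an RL training signal."""
    question: str
    answer: str
    reward: float

def mae_step(llm, buffer):
    """One self-evolution step: a single model plays all three roles.

    `llm(role, prompt)` is a hypothetical interface; in MAE the three
    agents are instantiated from one underlying LLM via role prompts.
    """
    # Proposer: generate a new question for the Solver to attempt.
    question = llm("proposer", "Propose a challenging question.")
    # Solver: attempt a solution to the proposed question.
    answer = llm("solver", question)
    # Judge: score the question-answer pair; the scalar score serves
    # as the reward that reinforcement learning optimizes against.
    reward = float(llm("judge", f"Q: {question}\nA: {answer}"))
    buffer.append(Transition(question, answer, reward))
    return buffer

# Stub model used only so the sketch runs end to end.
def stub_llm(role, prompt):
    if role == "proposer":
        return "What is 2 + 2?"
    if role == "solver":
        return "4"
    return "1.0"  # judge emits a score in [0, 1]

buffer = mae_step(stub_llm, [])
```

Because all three roles share one set of weights, improving the Judge's evaluations and the Proposer's question difficulty feeds back into the Solver through the shared model, which is the co-evolution the abstract describes.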