General reasoning represents a long-standing and formidable challenge in artificial intelligence. Recent breakthroughs, exemplified by large language models (LLMs) and chain-of-thought prompting, have achieved considerable success on foundational reasoning tasks. However, this success is heavily contingent upon extensive human-annotated demonstrations, and models' capabilities remain insufficient for more complex problems. Here we show that the reasoning abilities of LLMs can be incentivized through pure reinforcement learning (RL), obviating the need for human-labeled reasoning trajectories. The proposed RL framework facilitates the emergent development of advanced reasoning patterns, such as self-reflection, verification, and dynamic strategy adaptation. Consequently, the trained model achieves superior performance on verifiable tasks such as mathematics, coding competitions, and problems in STEM fields, surpassing its counterparts trained via conventional supervised learning on human demonstrations. Moreover, the emergent reasoning patterns exhibited by these large-scale models can be systematically harnessed to guide and enhance the reasoning capabilities of smaller models.