We introduce LADDER (Learning through Autonomous Difficulty-Driven Example Recursion), a framework that enables Large Language Models to autonomously improve their problem-solving capabilities through self-guided learning: the model recursively generates and solves progressively simpler variants of complex problems. Unlike prior approaches that require curated datasets or human feedback, LADDER leverages a model's own capabilities to generate easier question variants. We demonstrate LADDER's effectiveness on mathematical integration, improving Llama 3.2 3B's accuracy from 1% to 82% on undergraduate-level problems and enabling Qwen2.5 7B Deepseek-R1 Distilled to achieve 73% on the MIT Integration Bee qualifying examination. We also introduce TTRL (Test-Time Reinforcement Learning), in which we perform reinforcement learning on variants of test problems at inference time. TTRL enables Qwen2.5 7B Deepseek-R1 Distilled to achieve a state-of-the-art score of 90% on the MIT Integration Bee qualifying examination, surpassing OpenAI o1. These results show how self-directed strategic learning can achieve significant capability improvements without relying on architectural scaling or human supervision.
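To make the recursive variant-generation loop concrete, the following is a minimal, purely illustrative Python sketch of the control flow the abstract describes. The helper names (generate_simpler_variants, attempt_and_verify, reinforce) and the recursion depth are assumptions for illustration, not interfaces defined in this work; the model-facing calls are stubbed so the script runs end to end.

```python
# Conceptual sketch of the LADDER loop (hypothetical helper names; not the authors' code).
# Model-facing calls are stubbed so the control flow is runnable as-is.

import random
from typing import List


def generate_simpler_variants(problem: str, n: int = 3) -> List[str]:
    """Ask the model for n easier variants of `problem` (stubbed here)."""
    return [f"{problem} [simplified variant {i}]" for i in range(n)]


def attempt_and_verify(problem: str) -> bool:
    """Have the model attempt `problem` and verify the answer, e.g. numerically (stubbed)."""
    return random.random() < 0.5  # placeholder for a real solution check


def reinforce(problem: str, solved: bool) -> None:
    """Apply a reinforcement-learning update on the attempt (no-op in this sketch)."""
    pass


def ladder(problem: str, depth: int = 0, max_depth: int = 3) -> None:
    """Recursively generate easier variants, train on attempts at each level,
    and descend further only when a variant is still too hard."""
    if depth >= max_depth:
        return
    for variant in generate_simpler_variants(problem):
        solved = attempt_and_verify(variant)
        reinforce(variant, solved)
        if not solved:
            # Still too hard: recurse to produce even simpler variants.
            ladder(variant, depth + 1, max_depth)


if __name__ == "__main__":
    ladder("Integrate x * exp(x**2) dx")
```

Under this reading, TTRL applies the same generate-and-train loop to variants of each individual test problem at inference time, before the final answer is produced.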