We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving, achieving outstanding performance across multiple benchmarks. Although the community has already produced many excellent works on reasoning models, reproducing high-performance reasoning models remains difficult because key training details are often not fully disclosed. This report provides an in-depth analysis of the reasoning model, covering the entire post-training workflow from data preparation and long Chain-of-Thought supervised fine-tuning (long CoT SFT) to reinforcement learning (RL), together with detailed ablation studies for each experimental component. For SFT data, our experiments show that a small number of high-quality data sources is more effective than a large number of diverse data sources, and that difficult samples can achieve better results without accuracy-based filtering. In addition, we investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose Gradient-Preserving clipping Policy Optimization (GPPO), which gently backpropagates gradients from clipped tokens. GPPO not only enhances the model's exploration capacity but also improves its efficiency in learning from negative samples. Klear-Reasoner exhibits exceptional reasoning abilities in mathematics and programming, scoring 90.5% on AIME 2024, 83.2% on AIME 2025, 66.0% on LiveCodeBench V5, and 58.1% on LiveCodeBench V6.
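To make the "gradient-preserving clipping" idea concrete, the sketch below shows one possible way to keep gradient signal from clipped tokens in a PPO-style clipped surrogate, using a straight-through construction in PyTorch. This is an illustrative assumption only: the function name `gppo_surrogate`, its arguments, and the straight-through formulation are ours for exposition and are not claimed to be the exact GPPO objective described later in the report.

```python
import torch

def gppo_surrogate(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2):
    """Illustrative sketch of a gradient-preserving clipped surrogate.

    Standard PPO clipping makes the objective constant for tokens whose
    importance ratio leaves [1 - eps_low, 1 + eps_high], so those tokens
    contribute no gradient. Here the forward value is still clipped, but a
    straight-through trick lets the backward pass see the raw ratio, so
    clipped tokens keep contributing a (bounded) learning signal.
    This is NOT necessarily the paper's exact GPPO objective.
    """
    ratio = torch.exp(logp_new - logp_old)                    # per-token importance ratio
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Forward: clipped value. Backward: gradient of the raw ratio.
    ratio_st = clipped.detach() + ratio - ratio.detach()
    surrogate = torch.min(ratio * advantages, ratio_st * advantages)
    return -surrogate.mean()                                  # minimize the negative objective


# Minimal usage with dummy per-token tensors (hypothetical shapes).
logp_new = torch.randn(8, requires_grad=True)
logp_old = torch.randn(8)
advantages = torch.randn(8)
loss = gppo_surrogate(logp_new, logp_old, advantages)
loss.backward()   # clipped tokens still receive gradient through `ratio`
```

Compared with vanilla PPO clipping, the only change in this sketch is the straight-through term `ratio_st`, which preserves gradients for out-of-range tokens, matching the abstract's claims about retaining exploration signal and learning from negative (suboptimal) trajectories.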