Large Language Models have demonstrated outstanding performance across various downstream tasks and have been widely applied in multiple scenarios. Human-annotated preference data is commonly used to further improve LLMs' performance through training, but this approach is constrained by the upper limit of human capability. Therefore, the Self-Rewarding method has been proposed, in which LLMs generate training data by rewarding their own outputs. However, the existing self-rewarding paradigm is not effective in mathematical reasoning scenarios and may even lead to a decline in performance. In this work, we propose the Process-based Self-Rewarding pipeline for language models, which introduces long-thought reasoning, step-wise LLM-as-a-Judge, and step-wise preference optimization within the self-rewarding paradigm. Our new paradigm successfully enhances the performance of LLMs on multiple mathematical reasoning benchmarks through iterative Process-based Self-Rewarding, demonstrating the immense potential of self-rewarding to achieve LLM reasoning that may surpass human capabilities.
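To make the pipeline concrete, the following is a minimal sketch of one Process-based Self-Rewarding iteration, assuming a loop of step-level candidate generation, step-wise LLM-as-a-Judge scoring, and step-wise preference optimization. All function names (`generate_step_candidates`, `judge_step`, `step_dpo_update`) and the scoring/ranking details are illustrative placeholders, not the actual implementation described in this work.

```python
# Hypothetical sketch of one Process-based Self-Rewarding iteration.
# The placeholder bodies stand in for real model calls and training code.

from typing import List, Tuple

def generate_step_candidates(model, problem: str, steps: List[str], k: int = 4) -> List[str]:
    """Sample k candidate next reasoning steps given the steps so far (placeholder)."""
    return [f"candidate step {i} for: {problem}" for i in range(k)]

def judge_step(model, problem: str, steps: List[str], candidate: str) -> float:
    """Step-wise LLM-as-a-Judge: the same model scores a single candidate step (placeholder)."""
    return float(len(candidate) % 5)

def build_step_preferences(model, problem: str, max_steps: int = 8) -> List[Tuple[str, str, str]]:
    """Roll out a solution step by step, collecting (context, chosen, rejected) triples."""
    steps, prefs = [], []
    for _ in range(max_steps):
        candidates = generate_step_candidates(model, problem, steps)
        ranked = sorted(candidates,
                        key=lambda c: judge_step(model, problem, steps, c),
                        reverse=True)
        chosen, rejected = ranked[0], ranked[-1]
        prefs.append(("\n".join(steps), chosen, rejected))
        steps.append(chosen)  # continue the rollout with the preferred step
    return prefs

def step_dpo_update(model, prefs: List[Tuple[str, str, str]]):
    """Placeholder for step-wise preference optimization (e.g., step-level DPO)."""
    return model

def self_rewarding_iteration(model, problems: List[str]):
    """One iteration: the model rewards its own steps, then trains on the resulting pairs."""
    all_prefs = []
    for p in problems:
        all_prefs.extend(build_step_preferences(model, p))
    return step_dpo_update(model, all_prefs)
```

Repeating `self_rewarding_iteration` over successive model versions corresponds to the iterative training loop referenced in the abstract; the key distinction from prior self-rewarding work is that judging and preference optimization operate on individual reasoning steps rather than whole responses.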