Mathematical reasoning is a crucial capability for Large Language Models (LLMs), yet generating detailed and accurate reasoning traces remains a significant challenge. This paper introduces a novel approach to producing high-quality reasoning traces for LLM fine-tuning using online learning \textbf{Flows}. Our method employs an incremental output-production Flow, in which component LLMs collaboratively construct solutions through iterative communication. We train the Flow with online Direct Preference Optimization (DPO) learning with rollouts, generating DPO pairs for each training example and updating the models in real time. By directly comparing the quality of reasoning traces generated by our method with those produced through direct model inference, we demonstrate the effectiveness of our approach in improving LLM performance on mathematical reasoning tasks.
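As background for the online DPO learning mentioned above, the standard DPO objective scores a preferred ("chosen") and a dispreferred ("rejected") response by their log-probability margins relative to a frozen reference policy. The sketch below is a minimal, framework-free illustration of that per-pair loss; the function name, arguments, and the use of scalar log-probabilities are illustrative assumptions, not the paper's actual implementation, which trains component LLMs inside a Flow with rollouts.

```python
import math

def dpo_pair_loss(logp_chosen: float, logp_rejected: float,
                  ref_logp_chosen: float, ref_logp_rejected: float,
                  beta: float = 0.1) -> float:
    """Per-pair DPO loss (illustrative sketch, not the paper's code).

    Each argument is a sequence log-probability: logp_* under the policy
    being trained, ref_logp_* under the frozen reference policy.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference policy.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin; minimized when the policy
    # assigns the chosen response a larger relative log-probability.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference (zero margin) the loss is log 2; widening the margin in favor of the chosen response drives the loss toward zero, which is the pressure the online rollout-generated DPO pairs apply at each update.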