Creating high-quality datasets to improve Large Language Model (LLM) reasoning remains a significant challenge: current methods often generate low-quality or incorrect answers and extract limited information richness from available data sources. To address this, we propose AgenticMath, a novel agentic pipeline for generating high-quality mathematical question-answer pairs to enhance the supervised fine-tuning of LLMs. Our method operates in four stages: (1) a Seed Question Filter that selects questions with high information richness, complexity, and clarity; (2) an Agentic Question Rephrase step that employs a multi-agent system to generate diverse, logically consistent paraphrases; (3) an Answer Augment step that rewrites answers using chain-of-thought reasoning to improve numerical and logical correctness, without relying on human-provided labels; and (4) a final Question and Answer Evaluation that retains only the highest-quality pairs. Extensive experiments demonstrate that fine-tuning 3B-8B parameter LLMs on AgenticMath-generated datasets (comprising only 30-60K math samples) achieves competitive or superior performance on diverse in-domain and out-of-domain mathematical reasoning benchmarks compared to baselines trained on far more data (e.g., 400K or 2.3M samples). Our work demonstrates that targeted, high-quality data generation is a more efficient path to improving mathematical reasoning in LLMs than large-scale, low-quality alternatives.
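To make the four-stage pipeline concrete, the following is a minimal sketch of how such a data-generation loop could be wired together. All function names, the `call_llm` stub, the length-based filtering proxy, and the 0-10 judging threshold are illustrative assumptions for exposition, not the paper's actual implementation or prompts.

```python
# Hypothetical sketch of an AgenticMath-style four-stage pipeline.
# Stage heuristics and prompts below are assumptions, not the paper's.
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    answer: str


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (assumed; backend not specified here)."""
    raise NotImplementedError


def filter_seed_questions(questions: list[str], min_len: int = 20) -> list[str]:
    """Stage 1: keep questions judged rich, complex, and clear.
    A trivial length proxy stands in for a real quality-scoring model."""
    return [q for q in questions if len(q.split()) >= min_len]


def rephrase_with_agents(question: str, n_agents: int = 3) -> list[str]:
    """Stage 2: multi-agent rephrasing; each 'agent' is one prompted LLM call."""
    prompt = f"Paraphrase this math problem, preserving its logic:\n{question}"
    return [call_llm(prompt) for _ in range(n_agents)]


def augment_answer(question: str) -> str:
    """Stage 3: regenerate the answer with chain-of-thought, no human labels."""
    return call_llm(f"Solve step by step, then state the final answer:\n{question}")


def evaluate_pair(pair: QAPair) -> float:
    """Stage 4: score a QA pair; a real system might use an LLM judge."""
    prompt = (f"Rate 0-10 the correctness and clarity of:\n"
              f"Q: {pair.question}\nA: {pair.answer}")
    return float(call_llm(prompt))


def agentic_math_pipeline(seed_questions: list[str],
                          keep_threshold: float = 8.0) -> list[QAPair]:
    """Run all four stages and retain only pairs above the quality threshold."""
    dataset = []
    for q in filter_seed_questions(seed_questions):          # Stage 1
        for variant in rephrase_with_agents(q):              # Stage 2
            pair = QAPair(variant, augment_answer(variant))  # Stage 3
            if evaluate_pair(pair) >= keep_threshold:        # Stage 4
                dataset.append(pair)
    return dataset
```

The key design point the abstract emphasizes is that every stage is generative-plus-selective: candidates are produced by agents and then filtered, so the final 30-60K samples are a small, curated subset rather than the raw generation output.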