While closed-source Large Language Models (LLMs) demonstrate strong mathematical problem-solving abilities, open-source models continue to struggle with such tasks. To bridge this gap, we propose a data augmentation approach and introduce PersonaMathQA, a dataset derived from MATH and GSM8K, on which we train the PersonaMath models. Our approach consists of two stages: the first stage is learning from Persona Diversification, and the second stage is learning from Reflection. In the first stage, we regenerate detailed chain-of-thought (CoT) solutions as instructions using a closed-source LLM and introduce a novel persona-driven data augmentation technique to enhance the dataset's quantity and diversity. In the second stage, we incorporate reflection to fully leverage the more challenging and valuable questions. Evaluation of our PersonaMath models on MATH and GSM8K reveals that the PersonaMath-7B model (based on LLaMA-2-7B) achieves an accuracy of 24.2% on MATH and 68.7% on GSM8K, surpassing all baseline methods and achieving state-of-the-art performance. Notably, our dataset contains only 70.3K data points (merely 17.8% of MetaMathQA and 27% of MathInstruct), yet our model outperforms these baselines, demonstrating the high quality and diversity of our dataset, which enables more efficient model training. We open-source the PersonaMathQA dataset, the PersonaMath models, and our code for public use.
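The persona-driven augmentation step can be illustrated with a minimal sketch. This is not the authors' code: `call_llm` is a hypothetical placeholder for a closed-source LLM API, and the persona strings are invented examples. The idea shown is simply that each question is paired with multiple personas, and the LLM is prompted to regenerate a detailed CoT solution in each persona's voice, multiplying and diversifying the training data.

```python
# Hypothetical sketch of persona-driven CoT data augmentation.
# call_llm is a placeholder for a query to a closed-source LLM;
# in a real pipeline it would hit an actual model API.

def call_llm(prompt: str) -> str:
    # Placeholder implementation so the sketch runs end to end.
    return f"Detailed step-by-step solution for prompt: {prompt[:60]}..."

# Invented example personas; the actual persona pool is part of the dataset design.
PERSONAS = [
    "a patient middle-school math teacher",
    "a competition-math coach who emphasizes rigor",
]

def augment(question: str, personas=PERSONAS) -> list[dict]:
    """Produce one persona-styled (instruction, CoT solution) pair per persona."""
    samples = []
    for persona in personas:
        prompt = (
            f"You are {persona}. Solve the following problem with a "
            f"detailed chain-of-thought solution:\n{question}"
        )
        samples.append({
            "question": question,
            "persona": persona,
            "cot_solution": call_llm(prompt),
        })
    return samples

pairs = augment("Natalia sold clips to 48 friends. How many clips in total...?")
```

With N seed questions and P personas, this loop yields up to N*P diverse instruction pairs from the same underlying problems, which is how a comparatively small seed set can grow into a varied training corpus.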