Solving mathematical problems requires advanced reasoning abilities and presents notable challenges for large language models. Previous work typically synthesizes data from proprietary models to augment existing datasets, followed by instruction tuning to achieve top-tier results. However, our analysis of these datasets reveals a severe bias towards easy queries, with frequent failures to generate any correct response for the most challenging queries. Hypothesizing that difficult queries are crucial for learning complex reasoning, we propose Difficulty-Aware Rejection Tuning (DART), a method that allocates more sampling trials to difficult queries during the synthesis phase, enabling more extensive training on difficult samples. Utilizing DART, we create new datasets for mathematical problem-solving that focus more on difficult queries and are substantially smaller than previous ones. Remarkably, our synthesis process relies solely on a 7B-sized open-weight model, without the commonly used proprietary GPT-4. We fine-tune various base models, ranging from 7B to 70B in size, on our datasets, resulting in a series of strong models called DART-MATH. In comprehensive in-domain and out-of-domain evaluation on 6 mathematical benchmarks, DART-MATH significantly outperforms vanilla rejection tuning and is superior or comparable to previous state-of-the-art methods, despite using much smaller datasets and no proprietary models. Furthermore, our results position our synthetic datasets as the most effective and cost-efficient publicly available resources for advancing mathematical problem-solving.
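To make the synthesis idea concrete, the following is a minimal sketch of difficulty-aware rejection sampling as described above: instead of giving every query the same fixed number of generation attempts, sampling continues until a target number of correct responses is collected or a per-query budget is exhausted, so hard queries naturally receive more trials. The function and parameter names (`generate`, `is_correct`, `target_correct`, `max_trials`) are illustrative assumptions, not the paper's actual interface or hyperparameters.

```python
from typing import Callable, List, Tuple

def difficulty_aware_rejection_sampling(
    queries: List[Tuple[str, str]],          # (question, reference_answer) pairs
    generate: Callable[[str], str],          # samples one candidate solution from the model
    is_correct: Callable[[str, str], bool],  # checks a candidate against the reference answer
    target_correct: int = 4,                 # correct responses to collect per query (assumed value)
    max_trials: int = 256,                   # per-query budget cap for the hardest queries (assumed value)
) -> List[Tuple[str, str]]:
    """Collect (question, solution) training pairs, spending more sampling
    trials on queries that rarely yield a correct response, rather than a
    fixed number of trials per query as in vanilla rejection sampling."""
    dataset: List[Tuple[str, str]] = []
    for question, answer in queries:
        kept, trials = 0, 0
        # Easy queries reach the target quickly and stop early;
        # difficult queries keep sampling up to the budget cap.
        while kept < target_correct and trials < max_trials:
            candidate = generate(question)
            trials += 1
            if is_correct(candidate, answer):
                dataset.append((question, candidate))
                kept += 1
    return dataset
```

Under this sketch, the trial count per query adapts to difficulty automatically, which is what lets the resulting dataset contain substantially more training samples for the hardest queries.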