The use of Large Language Models (LLMs) in mathematical reasoning has become a cornerstone of related research, demonstrating the intelligence of these models and enabling potential practical applications through their advanced performance, such as in educational settings. Despite the variety of datasets and in-context learning algorithms designed to improve the ability of LLMs to automate mathematical problem solving, the lack of comprehensive benchmarking across datasets makes it difficult to select an appropriate model for a specific task. In this work, we present a benchmark that fairly compares seven state-of-the-art in-context learning algorithms for mathematical problem solving across five widely used mathematical datasets on four powerful foundation models. Furthermore, we explore the trade-off between efficiency and performance, highlighting the practical applications of LLMs for mathematical reasoning. Our results indicate that larger foundation models such as GPT-4o and LLaMA 3-70B solve mathematical reasoning problems largely independently of the specific prompting strategy, whereas for smaller models the choice of in-context learning approach significantly influences performance. Moreover, the optimal prompting strategy depends on the chosen foundation model. We open-source our benchmark code to support the integration of additional models in future research.