With the introduction of large language models (LLMs), automatic math reasoning has seen tremendous success. However, current methods primarily focus on providing solutions or on techniques such as Chain-of-Thought to enhance problem-solving accuracy. In this paper, we focus on improving the capability of mathematics teaching via a Socratic teaching-based LLM (\texttt{SocraticLLM}), which guides learners toward deep thinking through conversation, fostering clarity and self-discovery. We collect and release a high-quality mathematical teaching dataset, named \texttt{SocraticMATH}, which provides Socratic-style conversations about problems together with extra knowledge. We also propose a knowledge-enhanced LLM as a strong baseline that generates reliable responses comprising review, guidance/heuristic, rectification, and summarization. Experimental results comparing \texttt{SocraticLLM} with several strong generative models demonstrate its clear advantages. The code and dataset are available at \url{https://github.com/ECNU-ICALK/SocraticMath}.