The automatic generation of hints by Large Language Models (LLMs) within Intelligent Tutoring Systems (ITSs) has shown potential to enhance student learning. However, generating pedagogically sound hints that address student misconceptions and adhere to specific educational objectives remains challenging. This work explores using LLMs (GPT-4o and Llama-3-8B-Instruct) as teachers to generate effective hints for simulated students (GPT-3.5-turbo, Llama-3-8B-Instruct, or Mistral-7B-Instruct-v0.3) tackling math exercises designed for human high-school students according to cognitive science principles. We study three dimensions: 1) identifying the error patterns made by simulated students on secondary-level math exercises; 2) developing various prompts for GPT-4o as a teacher and evaluating their effectiveness in generating hints that enable simulated students to self-correct; and 3) testing the best-performing prompts, selected for their ability to produce relevant hints and facilitate error correction, with Llama-3-8B-Instruct as the teacher, allowing a performance comparison with GPT-4o. The results show that student-model errors increase at higher temperature settings. Notably, when hints are generated by GPT-4o, the most effective prompts are those tailored to specific errors and those providing general hints based on common mathematical errors. Interestingly, Llama-3-8B-Instruct as a teacher outperformed GPT-4o overall. Furthermore, the problem-solving and response-revision capabilities of the student LLMs, particularly GPT-3.5-turbo, improved significantly after receiving hints, especially at lower temperature settings, whereas Mistral-7B-Instruct's performance declined as the temperature increased.