Timely and accurate identification of student misconceptions is key to improving learning outcomes and pre-empting the compounding of student errors. However, this task depends heavily on the effort and intuition of the teacher. In this work, we present a novel approach for detecting misconceptions from student-tutor dialogues using large language models (LLMs). First, we use a fine-tuned LLM to generate plausible misconceptions, then retrieve the most promising candidates among them using embedding similarity with the input dialogue. These candidates are then assessed and re-ranked by another fine-tuned LLM to improve misconception relevance. Empirically, we evaluate our system on real dialogues from an educational tutoring platform. We consider multiple base LLMs, including LLaMA, Qwen, and Claude, in both zero-shot and fine-tuned settings. We find that our approach improves predictive performance over baseline models, and that fine-tuning both improves the quality of generated misconceptions and enables our models to outperform larger closed-source models. Finally, we conduct ablation studies to validate the importance of both the generation and re-ranking steps for misconception generation quality.
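The retrieval step described above, selecting candidate misconceptions by embedding similarity with the input dialogue, can be sketched minimally as follows. This is an illustrative sketch only: the paper's actual embedding model and scoring details are not specified here, so a toy bag-of-words embedding stands in for a real sentence encoder, and all function names are our own.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding; in practice a neural sentence
    # encoder would produce dense vectors instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(dialogue, candidate_misconceptions, k=3):
    # Rank generated misconception candidates by similarity to the
    # dialogue and keep the top k for the re-ranking LLM.
    d = embed(dialogue)
    scored = sorted(candidate_misconceptions,
                    key=lambda c: cosine(embed(c), d),
                    reverse=True)
    return scored[:k]
```

In the full pipeline, the candidates passed in would come from the fine-tuned generator LLM, and the top-k list returned here would be handed to the second fine-tuned LLM for assessment and re-ranking.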