Knowledge tracing (KT), which aims to mine students' mastery of knowledge from their exercise records and predict their performance on future test questions, is a critical task in educational assessment. While researchers have achieved tremendous success with the rapid development of deep learning techniques, current knowledge tracing tasks remain disconnected from real-world teaching scenarios: they rely heavily on extensive student data and solely predict numerical performance, whereas teachers assess students' knowledge states from limited practice and provide explanatory feedback. To fill this gap, we explore a new task formulation: Explainable Few-shot Knowledge Tracing. Leveraging the powerful reasoning and generation abilities of large language models (LLMs), we propose a cognition-guided framework that tracks student knowledge from only a few student records while providing natural language explanations. Experimental results on three widely used datasets show that LLMs can perform comparably or even superior to competitive deep knowledge tracing methods. We also discuss potential directions and call for future improvements on relevant topics.