Knowledge Tracing (KT) aims to estimate a learner's evolving mastery from interaction histories. Recent studies have explored Large Language Models (LLMs) for KT by exploiting their autoregressive nature, but such approaches typically require fine-tuning and exhibit unstable or near-random performance. Moreover, prior KT systems focus primarily on prediction and rely on multi-stage pipelines for feedback and recommendation, increasing system complexity and resource requirements. To address these limitations, we propose Thinking-KT, a training-free KT framework that incorporates Test-Time Scaling (TTS), enabling even small LLMs to achieve competitive KT performance. Within this framework, a small LLM can jointly perform KT prediction, personalized feedback generation, and learning recommendation in a single unified output without degrading prediction accuracy. Beyond performance, we present a systematic analysis of reasoning traces in KT. Our results demonstrate that TTS is a critical yet underexplored factor in LLM-based KT, and that small LLMs can serve as unified Intelligent Tutoring System (ITS) engines.
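To make the training-free, unified-output idea concrete, the following is a minimal sketch of how a single test-time prompt could elicit step-by-step reasoning from a small LLM and return the KT prediction, feedback, and recommendation together. This is not the paper's actual prompt template or pipeline: the OpenAI-compatible endpoint, the model name "qwen2.5-7b-instruct", the prompt wording, and the JSON schema are all illustrative assumptions.

```python
# Minimal sketch of a training-free, TTS-style unified KT query.
# Assumptions (not from the paper): a local OpenAI-compatible server,
# a placeholder small model, and an illustrative prompt/JSON schema.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

history = [
    {"concept": "fractions", "question": "1/2 + 1/3 = ?", "correct": True},
    {"concept": "fractions", "question": "2/5 + 3/7 = ?", "correct": False},
]
next_question = {"concept": "fractions", "question": "1/4 + 1/6 = ?"}

prompt = (
    "You are a knowledge-tracing engine. First reason step by step about the "
    "learner's mastery of each concept based on the interaction history, then "
    "answer in JSON with keys: prediction (0/1 for the next question), "
    "feedback (personalized feedback), recommendation (what to study next).\n\n"
    f"History: {json.dumps(history)}\n"
    f"Next question: {json.dumps(next_question)}"
)

resp = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # placeholder small LLM
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,
)
# The response contains the reasoning trace followed by the unified JSON output.
print(resp.choices[0].message.content)
```

In this sketch, test-time scaling amounts to spending extra inference compute on the explicit reasoning step before the prediction, rather than on any fine-tuning; the unified JSON keeps prediction, feedback, and recommendation in one output as the abstract describes.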