Educational data mining (EDM) is a field of applied computing that focuses on automatically analyzing data from learning contexts. Early prediction for identifying at-risk students is a crucial and widely researched topic in EDM. It enables instructors to help at-risk students stay on track, preventing dropout or failure. Previous studies have predicted students' learning performance to identify at-risk students by applying machine learning to data collected from e-learning platforms. However, most studies identified at-risk students using the entire course data after the course had finished. This does not correspond to the real-world scenario, in which at-risk students may drop out before the course ends. To address this problem, we introduce RNN-Attention-KD, a knowledge distillation (KD) framework for predicting at-risk students early throughout a course. It leverages the strengths of recurrent neural networks (RNNs) in handling time-sequence data to predict students' performance at each time step and employs an attention mechanism to focus on the most relevant time steps for improved predictive accuracy. At the same time, KD compresses the number of time steps the model requires, enabling early prediction. In an empirical evaluation, RNN-Attention-KD outperforms traditional neural network models in recall and F1-measure. For example, it obtained a recall and F1-measure of 0.49 and 0.51 for Weeks 1--3 and of 0.51 and 0.61 for Weeks 1--6 across all datasets from four years of a university course. An ablation study then investigated the contributions of different knowledge transfer methods (distillation objectives). We found that the hint loss from the RNN's hidden layer and the context vector loss from the attention module on the RNN enhance the model's performance in identifying at-risk students. These results are relevant for EDM researchers employing deep learning models.
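To make the two distillation objectives named above concrete, the following is a minimal numpy sketch, not the authors' implementation: a toy tanh RNN with dot-product attention, a "teacher" that sees all weeks, a "student" that sees only the first few, and mean-squared-error versions of the hint loss (matching hidden states) and the context vector loss (matching the attention module's output). All weights, dimensions, and the specific RNN cell are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_attention_forward(X, Wx, Wh, q):
    """Run a simple tanh RNN over time steps and pool with attention.

    X: (T, d_in) one student's per-week activity features.
    Returns all hidden states H (T, d_h) and the attention
    context vector (d_h,), a weighted sum of the hidden states.
    """
    T = X.shape[0]
    d_h = Wh.shape[0]
    h = np.zeros(d_h)
    H = np.zeros((T, d_h))
    for t in range(T):
        h = np.tanh(X[t] @ Wx + h @ Wh)  # recurrent update
        H[t] = h
    scores = H @ q                        # relevance of each time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax attention weights
    context = alpha @ H                   # context vector
    return H, context

def distillation_losses(H_teacher, c_teacher, H_student, c_student):
    """MSE hint loss (hidden states) and context-vector loss.

    The student sees only the first T_s weeks, so hidden states are
    matched on the overlapping time steps only.
    """
    T_s = H_student.shape[0]
    hint_loss = np.mean((H_teacher[:T_s] - H_student) ** 2)
    context_loss = np.mean((c_teacher - c_student) ** 2)
    return hint_loss, context_loss

# Toy example: a teacher over 15 weeks, a student over the first 3.
# Shared random weights stand in for trained parameters.
d_in, d_h = 4, 8
Wx = rng.normal(scale=0.3, size=(d_in, d_h))
Wh = rng.normal(scale=0.3, size=(d_h, d_h))
q = rng.normal(size=d_h)

X_full = rng.normal(size=(15, d_in))
H_t, c_t = rnn_attention_forward(X_full, Wx, Wh, q)
H_s, c_s = rnn_attention_forward(X_full[:3], Wx, Wh, q)
hint_loss, context_loss = distillation_losses(H_t, c_t, H_s, c_s)
print(hint_loss, context_loss)
```

In training, these two terms would be added (with weighting coefficients) to the student's ordinary classification loss, so that the early-week student mimics the full-course teacher's internal representations rather than only its final labels.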