The rapid expansion of online courses and social media has generated large volumes of unstructured learner-generated text. Understanding how learners construct knowledge in these spaces is crucial for analysing learning processes, informing content design, and providing feedback at scale. However, existing approaches typically rely on manual coding of well-structured discussion forums, which does not scale to the fragmented discourse found in online learning. This study proposes and validates a framework that combines a codebook inspired by the Interaction Analysis Model with an automated classifier to enable large-scale analysis of knowledge construction in unstructured online discourse. We adapt four comment-level categories of knowledge construction: Non-Knowledge Construction, Share, Explore, and Integrate. Three trained annotators coded a balanced sample of 20,000 comments from YouTube education channels. The codebook demonstrated strong reliability, with Cohen's kappa = 0.79 on the main dataset and 0.85--0.93 across four additional educational domains. For automated classification, bag-of-words baselines were compared with transformer-based language models using 10-fold cross-validation. A DeBERTa-v3-large model achieved the highest macro-averaged F1 score (0.841), outperforming all baselines and other transformer models. External validation on four domains yielded macro-F1 above 0.705, with stronger transfer in medicine and programming, where discourse was more structured and task-focused, and weaker transfer in language and music, where comments were more varied and context-dependent. Overall, the study shows that theory-driven, semi-automated analysis of knowledge construction at scale is feasible, enabling the integration of knowledge-construction indicators into learning analytics and the design of online learning environments.
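For readers unfamiliar with the two metrics reported above, the following is a minimal sketch of how Cohen's kappa (inter-annotator reliability) and macro-averaged F1 (classifier performance) are computed with scikit-learn. The label lists are hypothetical toy data, not the study's dataset; only the four category names come from the abstract.

```python
# Sketch of the two metrics named in the abstract: Cohen's kappa for
# annotator agreement and macro-averaged F1 for classifier evaluation.
# The ten labels per annotator below are invented for illustration.
from sklearn.metrics import cohen_kappa_score, f1_score

CATEGORIES = ["Non-Knowledge Construction", "Share", "Explore", "Integrate"]

# Hypothetical codes from two annotators for the same ten comments
# (they disagree on one comment, index 6).
annotator_a = ["Share", "Explore", "Share", "Integrate",
               "Non-Knowledge Construction", "Share", "Explore",
               "Explore", "Integrate", "Share"]
annotator_b = ["Share", "Explore", "Share", "Integrate",
               "Non-Knowledge Construction", "Share", "Share",
               "Explore", "Integrate", "Share"]

# Kappa corrects raw agreement (here 9/10) for chance agreement.
kappa = cohen_kappa_score(annotator_a, annotator_b)

# Treating one list as gold labels and the other as predictions,
# macro-F1 averages per-class F1 so every category counts equally.
macro_f1 = f1_score(annotator_a, annotator_b,
                    average="macro", labels=CATEGORIES)

print(f"Cohen's kappa: {kappa:.3f}")
print(f"Macro-F1:      {macro_f1:.3f}")
```

Macro averaging matters for this task because the four knowledge-construction categories are unlikely to be equally frequent in real comment streams; a micro or accuracy-style score would be dominated by the majority class.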