This paper investigates supervised fine-tuning of large language models (LLMs) to improve their pedagogical alignment in computing education, addressing concerns that LLMs may hinder learning outcomes. The project utilises a proprietary dataset of 2,500 high-quality question/answer pairs from programming course forums and explores two research questions: the suitability of university course forums as sources for fine-tuning datasets, and how supervised fine-tuning can improve LLMs' alignment with educational principles such as constructivism. Initial findings suggest benefits to the pedagogical alignment of LLMs, though deeper evaluations are required.