Effective personalized feedback is crucial for learning programming. However, providing personalized, real-time feedback in large programming classrooms poses significant challenges for instructors. This paper introduces SPHERE, an interactive system that leverages Large Language Models (LLMs) and structured review of LLM outputs to scale personalized feedback for in-class coding activities. SPHERE comprises two key components: an Issue Recommendation Component that identifies critical patterns in students' code and discussion, and a Feedback Review Component that uses a ``strategy-detail-verify'' approach for efficient feedback creation and verification. An in-lab, between-subjects study demonstrates SPHERE's effectiveness in improving feedback quality and the overall feedback review process compared to a baseline system using off-the-shelf LLM outputs. This work contributes a novel approach to scaling personalized feedback in programming education, addressing the challenges of real-time response, issue prioritization, and large-scale personalization.