Grading student assignments in STEM courses is a laborious and repetitive task for tutors, often requiring a week to assess an entire class. For students, this feedback delay prevents iterating on incorrect solutions, hampers learning, and increases stress when exercise scores determine admission to the final exam. Recent advances in AI-assisted education, such as automated grading and tutoring systems, aim to address these challenges by providing immediate feedback and reducing the grading workload. However, existing solutions often fall short due to privacy concerns, reliance on proprietary closed-source models, lack of support for combining Markdown, LaTeX, and Python code, or exclusion of course tutors from the grading process. To overcome these limitations, we introduce PyEvalAI, an AI-assisted evaluation system that automatically scores Jupyter notebooks using a combination of unit tests and a locally hosted language model to preserve privacy. Our approach is free, open-source, and ensures that tutors retain full control over the grading process. A case study demonstrates its effectiveness in improving feedback speed and grading efficiency for exercises in a university-level numerics course.
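To illustrate the unit-test side of the grading pipeline described above, the following is a minimal sketch, not PyEvalAI's actual API: names such as `grade_cell` and the example notebook cell are hypothetical, and the real system additionally involves a locally hosted language model for free-form answers.

```python
# Hypothetical sketch: score a student's solution cell (extracted from a
# Jupyter notebook) by running unit tests against it. All names here are
# illustrative assumptions, not PyEvalAI's real interface.

def grade_cell(cell_source, tests):
    """Execute a code cell, then award points for each passing check."""
    namespace = {}
    try:
        exec(cell_source, namespace)  # run the student's code in isolation
    except Exception:
        return 0  # code does not run at all: no points
    score = 0
    for check, points in tests:
        try:
            assert check(namespace)
            score += points
        except Exception:
            pass  # failed check: no points for this item
    return score

# Example student cell: trapezoidal integration, a typical numerics task.
student_cell = """
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s
"""

tests = [
    # exact for linear integrands: integral of x over [0, 1] is 0.5
    (lambda ns: abs(ns["trapezoid"](lambda x: x, 0, 1, 100) - 0.5) < 1e-9, 2),
    # integral of x^2 over [0, 1] is 1/3; error shrinks as O(h^2)
    (lambda ns: abs(ns["trapezoid"](lambda x: x * x, 0, 1, 1000) - 1 / 3) < 1e-4, 3),
]

print(grade_cell(student_cell, tests))  # full marks: 5
```

In such a design, objective checks like these can be scored deterministically, while subjective Markdown or LaTeX answers are left to the language model and the tutor's review.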