This study investigates students' perceptions of Artificial Intelligence (AI) grading systems in an undergraduate computer science course (n = 27), focusing on a block-based programming final project. Guided by the ethical principles framework articulated by Jobin (2019), our study examines fairness, trust, consistency, and transparency in AI grading by comparing AI-generated feedback with the original human-graded feedback. Findings reveal student concerns about the AI's lack of contextual understanding and personalization. We recommend that equitable and trustworthy AI systems reflect human judgment, flexibility, and empathy, and that they serve as supplementary tools under human oversight. This work contributes to ethics-centered assessment practices by amplifying student voices and offering design principles for humanizing AI in designed learning environments.