This study investigates students' perceptions of Artificial Intelligence (AI) grading systems in an undergraduate computer science course (n = 27), focusing on a block-based programming final project. Guided by the ethical principles framework articulated by Jobin (2019), we examine fairness, trust, consistency, and transparency in AI grading by comparing AI-generated feedback with the original human-graded feedback. Findings reveal student concerns about the AI's limited contextual understanding and lack of personalized feedback. We recommend that equitable and trustworthy AI grading systems reflect the flexibility and empathy of human judgment and serve as supplementary tools under human oversight. This work contributes to ethics-centered assessment practices by amplifying student voices and offering design principles for humanizing AI in designed learning environments.