Generative AI blurs the lines of authorship in computing education, creating uncertainty around how students should attribute AI assistance. To examine these emerging norms, we conducted a factorial vignette study with 94 computer science students across 102 unique scenarios, systematically manipulating assessment type, AI autonomy, student activity, prior knowledge, and human refinement effort. This paper details how these factors influence students' perceptions of ownership and their disclosure preferences. Our findings indicate that attribution judgments are driven primarily by the degree of AI assistance and the extent of human refinement. We also found that students' perception of authorship significantly predicts their policy expectations. We conclude by proposing a shift from statement-style policies to process-oriented attribution, transforming disclosure into a pedagogical mechanism for fostering critical engagement with AI-generated content.