As generative AI becomes embedded in higher education, it increasingly shapes how students complete academic tasks. While these systems offer efficiency and support, concerns persist regarding over-automation, diminished student agency, and the potential for unreliable or hallucinated outputs. This study conducts a mixed-methods audit of student-AI collaboration preferences by examining the alignment between current AI capabilities and students' desired levels of automation in academic work. Using two sequential and complementary surveys, we capture students' perceived benefits, risks, and preferred boundaries when using AI. The first survey employs an existing task-based framework to assess preferences for and actual usage of AI across 12 academic tasks, alongside primary concerns and reasons for use. The second survey, informed by the first, explores how AI systems could be designed to address these concerns through open-ended questions. This study aims to identify gaps between existing AI affordances and students' normative expectations of collaboration, informing the development of more effective and trustworthy AI systems for education.