As generative AI becomes embedded in higher education, it increasingly shapes how students complete academic tasks. While these systems offer efficiency and support, concerns persist regarding over-automation, diminished student agency, and the potential for unreliable or hallucinated outputs. This study conducts a mixed-methods audit of student-AI collaboration preferences by examining the alignment between current AI capabilities and students' desired levels of automation in academic work. Using two sequential and complementary surveys, we capture students' perceived benefits, risks, and preferred boundaries when using AI. The first survey employs an existing task-based framework to assess preferences for and actual usage of AI across 12 academic tasks, alongside primary concerns and reasons for use. The second survey, informed by the first, explores how AI systems could be designed to address these concerns through open-ended questions. This study aims to identify gaps between existing AI affordances and students' normative expectations of collaboration, informing the development of more effective and trustworthy AI systems for education.