Prior research has raised concerns about students' over-reliance on large language models (LLMs) in higher education. This paper examines how Computer Science students and instructors engage with LLMs across five scenarios: "Writing", "Quiz", "Programming", "Project-based learning", and "Information retrieval". Through user studies with 16 students and 6 instructors, we identify 7 key intents, including increasingly complex student practices. Our findings reveal varying levels of conflict between student practices and instructor norms, ranging from clear conflict in "Writing-generation" and "(Programming) quiz-solving", through partial conflict in "Programming project-implementation" and "Project-based learning", to broad agreement in "Writing-revision & ideation", "(Programming) quiz-correction", and "Info-query & summary". We document that instructors are shifting from prohibiting students' use of LLMs to recognizing it in high-quality work, integrating usage records into assessment and grading. Finally, we propose LLM design guidelines: deploying default guardrails with game-like, empathetic interaction to keep students from "deserting" LLMs, especially in "Writing-generation", while employing comprehension checks in low-conflict intents to promote learning.