Modern computing students often rely on both natural-language prompting and manual code editing to solve programming tasks. Yet we still lack a clear understanding of how these two modes are combined in practice, and how their usage varies with task complexity and student ability. In this paper, we investigate this through a large-scale study in an introductory programming course, collecting 13,305 interactions from 355 students during a three-day lab activity. Our analysis shows that students primarily use prompting to generate initial solutions, and then often enter short edit-run loops to refine their code after a failed execution. Student reflections confirm that prompting helps structure solutions, editing is effective for making targeted corrections, and both support learning. We find that manual editing becomes more frequent as task complexity increases, but most edits remain concise, with many affecting a single line of code. Higher-performing students tend to succeed using prompting alone, while lower-performing students rely more on edits. These findings highlight the role of manual editing as a deliberate last-mile repair strategy, complementing prompting in AI-assisted programming workflows.