The rapid advancement of Large Language Models (LLMs) presents both challenges and opportunities for Natural Language Processing (NLP) education. This paper introduces ``Vibe Coding,'' a pedagogical approach that leverages LLMs as coding assistants while maintaining focus on conceptual understanding and critical thinking. We describe the implementation of this approach in a senior-level undergraduate NLP course, where students completed seven labs using LLMs for code generation while being assessed primarily on conceptual understanding through critical reflection questions. Analysis of end-of-course feedback from 19 students reveals high satisfaction (mean scores 4.4-4.6/5.0) across engagement, conceptual learning, and assessment fairness. Students particularly valued the reduced cognitive load from debugging, enabling deeper focus on NLP concepts. However, challenges emerged around time constraints, LLM output verification, and the need for clearer task specifications. Our findings suggest that when properly structured with mandatory prompt logging and reflection-based assessment, LLM-assisted learning can shift focus from syntactic fluency to conceptual mastery, preparing students for an AI-augmented professional landscape.