Recent advances in generative artificial intelligence (AI), such as ChatGPT, Google Gemini, and other large language models (LLMs), pose significant challenges for maintaining academic integrity within higher education. This paper examines the structural susceptibility of a certified M.Sc. Cyber Security program at a UK Russell Group university to the misuse of LLMs. Building on and extending a recently proposed quantitative framework for estimating assessment-level exposure, we analyse all summative assessments on the program and derive both module-level and program-level exposure metrics. Our results show that the majority of modules exhibit high exposure to LLM misuse, driven largely by independent project- and report-based assessments, with the capstone dissertation module particularly vulnerable. We introduce a credit-weighted program exposure score and find that the program as a whole falls within a high to very high risk band. We also discuss contextual factors -- such as block teaching and a predominantly international cohort -- that may amplify incentives to misuse LLMs. In response, we outline a set of LLM-resistant assessment strategies, critically assess the limitations of detection-based approaches, and argue for a pedagogy-first approach that preserves academic standards while preparing students for the realities of professional cyber security practice.