Context. The security of critical infrastructure has been a fundamental concern since the advent of computers, and this concern has only intensified in today's cyber-warfare landscape. Protecting mission-critical systems (MCSs), including essential assets such as healthcare, telecommunications, and military coordination, is vital for national security. These systems require prompt and comprehensive governance to ensure their resilience, yet recent events have shown that meeting these demands is increasingly challenging. Aim. Building on prior research that demonstrated the potential of generative AI (GAI), particularly Large Language Models (LLMs), in improving risk-analysis tasks, we aim to explore the perspectives of practitioners, specifically developers and security personnel, on using GAI in the governance of IT MCSs, seeking to provide insights and recommendations for various stakeholders, including researchers, practitioners, and policymakers. Method. We designed a survey to collect the practical experiences, concerns, and expectations of practitioners who develop and implement security solutions in the context of MCSs. Analyzing these data will help identify key trends, challenges, and opportunities for introducing GAI in this niche domain. Conclusions and Future Works. Our findings highlight that the safe use of LLMs in MCS governance requires interdisciplinary collaboration. Researchers should focus on designing regulation-oriented models and on accountability; practitioners emphasize data protection and transparency; and policymakers must establish a unified AI framework with global benchmarks to ensure ethical and secure LLM-based MCS governance.