Large Language Models (LLMs) and generative AI are transforming cybersecurity, enabling both advanced defenses and novel attacks. Organizations now use LLMs for threat detection, code review, and DevSecOps automation, while adversaries leverage them to produce malware and run targeted social-engineering campaigns. This paper presents a unified analysis integrating offensive and defensive perspectives on GenAI-driven cybersecurity. Drawing on 70 academic, industry, and policy sources, it examines the rise of AI-facilitated threats and their implications for global security, grounding the need for scalable defensive mechanisms. We make two primary contributions: the LLM Scalability Risk Index (LSRI), a parametric framework for stress-testing operational risks when deploying LLMs in security-critical environments, and a model-supply-chain framework that establishes a verifiable root of trust across the model lifecycle. We also synthesize defense strategies from platforms such as Google Play Protect and Microsoft Security Copilot, and outline a governance roadmap for secure, large-scale LLM deployment.