The increasing frequency and sophistication of cybersecurity vulnerabilities in software systems underscore the urgent need for robust and effective methods of vulnerability assessment. However, existing approaches often rely on highly technical and abstract frameworks, which hinder understanding and increase the likelihood of exploitation, resulting in severe cyberattacks. Given the growing adoption of Large Language Models (LLMs) across diverse domains, this paper explores their potential application in cybersecurity, specifically for enhancing the assessment of software vulnerabilities. We propose ChatNVD, an LLM-based cybersecurity vulnerability assessment tool that leverages the National Vulnerability Database (NVD) to provide context-rich insights and streamline vulnerability analysis for cybersecurity professionals, developers, and non-technical users. We develop three variants of ChatNVD, built on three prominent LLMs: GPT-4o mini by OpenAI, Llama 3 by Meta, and Gemini 1.5 Pro by Google. To evaluate their efficacy, we conduct a comparative analysis of these models using a comprehensive questionnaire of common security vulnerability questions, assessing their accuracy in identifying and analyzing software vulnerabilities. This study provides valuable insights into the potential of LLMs to address critical challenges in understanding and mitigating software vulnerabilities.