Large Language Models (LLMs) have demonstrated potential in cybersecurity applications, but issues such as hallucination and a lack of truthfulness have undermined confidence in their outputs. Existing benchmarks provide general evaluations but do not sufficiently address the practical, applied aspects of LLM performance on cybersecurity-specific tasks. To address this gap, we introduce SECURE (Security Extraction, Understanding \& Reasoning Evaluation), a benchmark designed to assess LLM performance in realistic cybersecurity scenarios. SECURE comprises six datasets focused on the Industrial Control System sector, evaluating knowledge extraction, understanding, and reasoning against industry-standard sources. We evaluate seven state-of-the-art models on these tasks, providing insights into their strengths and weaknesses in cybersecurity contexts, and offering recommendations for improving the reliability of LLMs as cyber advisory tools.