Large Language Models (LLMs) have demonstrated potential in cybersecurity applications, but problems such as hallucinations and a lack of truthfulness have undermined confidence in them. Existing benchmarks provide general evaluations but do not sufficiently address the practical, applied aspects of LLM performance on cybersecurity-specific tasks. To address this gap, we introduce SECURE (Security Extraction, Understanding \& Reasoning Evaluation), a benchmark designed to assess LLM performance in realistic cybersecurity scenarios. SECURE includes six datasets focused on the Industrial Control System (ICS) sector, evaluating knowledge extraction, understanding, and reasoning against industry-standard sources. We evaluate seven state-of-the-art models on these tasks, provide insights into their strengths and weaknesses in cybersecurity contexts, and offer recommendations for improving the reliability of LLMs as cyber advisory tools.