Large language models (LLMs) have demonstrated significant potential in various tasks, including those requiring human-level intelligence, such as vulnerability detection. However, recent efforts to use LLMs for vulnerability detection remain preliminary, as they do not establish whether a subject LLM's vulnerability reasoning capability stems from the model itself or from external aids such as knowledge retrieval and tooling support. In this paper, we aim to decouple LLMs' vulnerability reasoning capability from their other capabilities, such as vulnerability knowledge adoption, context information retrieval, and advanced prompt schemes. We introduce LLM4Vuln, a unified evaluation framework that separates and assesses LLMs' vulnerability reasoning capability and examines how it improves when combined with these other enhancements. We conduct controlled experiments on 147 ground-truth vulnerabilities and 147 non-vulnerable cases in Solidity, Java, and C/C++, testing them in a total of 3,528 scenarios across four LLMs (GPT-3.5, GPT-4, Phi-3, and Llama 3). Our findings reveal the varying impacts of knowledge enhancement, context supplementation, and prompt schemes. We also identify 14 zero-day vulnerabilities in four pilot bug bounty programs, resulting in $3,576 in bounties.