This paper proposes a pipeline for quantitatively evaluating interactive Large Language Models (LLMs) using publicly available datasets. We carry out an extensive technical evaluation of LLMs on the Big-Vul dataset, covering four common software vulnerability tasks: detection, assessment, location, and description. This evaluation assesses the multi-task capabilities of LLMs on a single dataset. We find that existing state-of-the-art approaches and pre-trained Language Models (LMs) are generally superior to LLMs in software vulnerability detection. However, in software vulnerability assessment and location, certain LLMs (e.g., CodeLlama and WizardCoder) outperform pre-trained LMs, and providing more contextual information enhances the vulnerability assessment capabilities of LLMs. Moreover, LLMs exhibit strong vulnerability description capabilities, but their tendency to produce excessive output significantly weakens their performance relative to pre-trained LMs. Overall, although LLMs perform well in some respects, they still need to improve in understanding the subtle differences in code vulnerabilities and in their ability to describe vulnerabilities in order to fully realize their potential. Our evaluation pipeline provides valuable insights into the capabilities of LLMs for handling software vulnerabilities.
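To make the pipeline concrete, the sketch below illustrates how one of the four tasks (binary vulnerability detection) can be scored against Big-Vul-style samples. It is a minimal illustration, not the paper's exact protocol: `query_llm`, the prompt template, and the field names `func`/`target` are assumptions standing in for an arbitrary chat-completion client and for the dataset's processed columns.

```python
from typing import Callable

# Illustrative prompt template; the actual prompts used in the evaluation may differ.
PROMPT = (
    "You are a security expert. Does the following C/C++ function "
    "contain a vulnerability? Answer 'yes' or 'no' only.\n\n{code}"
)

def parse_label(response: str) -> int:
    """Map a free-form LLM answer to a binary label (1 = vulnerable)."""
    return 1 if "yes" in response.strip().lower() else 0

def evaluate_detection(samples: list[dict], query_llm: Callable[[str], str]) -> dict:
    """Score the detection task over Big-Vul-style samples, assumed to
    carry 'func' (source code) and 'target' (0/1 ground-truth) fields."""
    tp = fp = fn = 0
    for sample in samples:
        pred = parse_label(query_llm(PROMPT.format(code=sample["func"])))
        gold = sample["target"]
        tp += int(pred == 1 and gold == 1)
        fp += int(pred == 1 and gold == 0)
        fn += int(pred == 0 and gold == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```

The other three tasks follow the same pattern with task-specific prompts and metrics (e.g., severity agreement for assessment, line-level matching for location, and text-similarity scores for description).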