SAST (Static Application Security Testing) tools are among the most widely used techniques in defensive cybersecurity, employed by commercial and non-commercial organizations alike to identify potential vulnerabilities in software. Despite their great utility, they generate numerous false positives, requiring costly manual filtering (known as triage). While LLM-powered agents show promise for automating cybersecurity tasks, existing benchmarks fail to emulate real-world distributions of SAST findings. We introduce SastBench, a benchmark for evaluating SAST triage agents that combines real CVEs as true positives with filtered SAST tool findings as approximate false positives. SastBench features an agent-agnostic design. We evaluate several agents on the benchmark, present a comparative analysis of their performance, provide a detailed analysis of the dataset, and discuss the implications for future development.