We introduce SecCodeBench-V2, a publicly released benchmark for evaluating the ability of Large Language Model (LLM) coding copilots to generate secure code. SecCodeBench-V2 comprises 98 generation and fix scenarios derived from Alibaba Group's production codebases, whose underlying security issues span 22 common CWE (Common Weakness Enumeration) categories across five programming languages: Java, C, Python, Go, and JavaScript. SecCodeBench-V2 adopts a function-level task formulation: each scenario provides a complete project scaffold and requires the model to implement or patch a designated target function under fixed interfaces and dependencies. For each scenario, SecCodeBench-V2 provides executable proof-of-concept (PoC) test cases for both functional validation and security verification. All test cases are authored and double-reviewed by security experts, ensuring high fidelity, broad coverage, and reliable ground truth. Beyond the benchmark itself, we build a unified evaluation pipeline that assesses models primarily via dynamic execution: for most scenarios, we compile and run model-generated artifacts in isolated environments and execute the PoC test cases to validate both functional correctness and security properties. For scenarios whose security issues cannot be adjudicated with deterministic test cases, we additionally employ an LLM-as-a-judge oracle. To summarize performance across heterogeneous scenarios and difficulty levels, we design a Pass@K-based scoring protocol with principled aggregation over scenarios and severity levels, enabling holistic, comparable evaluation across models. Overall, SecCodeBench-V2 provides a rigorous and reproducible foundation for assessing the security posture of AI coding assistants. Results and artifacts are released at https://alibaba.github.io/sec-code-bench, and the benchmark is publicly available at https://github.com/alibaba/sec-code-bench.
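The Pass@K-based scoring can be sketched as follows. This is a minimal illustration, assuming the standard unbiased Pass@K estimator and a hypothetical set of severity weights (`benchmark_score` and its weight table are illustrative names, not the paper's exact protocol):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased Pass@K estimator: the probability that at least one of k
    # samples drawn without replacement from n generations (c of which
    # pass) is a passing one.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def benchmark_score(scenarios: list[dict], k: int = 1) -> float:
    # Severity-weighted aggregation over scenarios. The weight table is a
    # hypothetical example; the benchmark's actual aggregation may differ.
    weights = {"critical": 4, "high": 3, "medium": 2, "low": 1}
    total_weight = sum(weights[s["severity"]] for s in scenarios)
    weighted = sum(
        weights[s["severity"]] * pass_at_k(s["n"], s["c"], k)
        for s in scenarios
    )
    return weighted / total_weight

# Example: one high-severity scenario where 2 of 4 generations passed
# both functional and security checks.
score = benchmark_score([{"severity": "high", "n": 4, "c": 2}], k=1)
```

The unbiased estimator avoids the variance of naively sampling k completions, and the severity weighting lets high-impact weaknesses dominate the aggregate score.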