This paper introduces SecRepoBench, a benchmark for evaluating code agents on secure code completion in real-world repositories. SecRepoBench comprises 318 code completion tasks across 27 C/C++ repositories, covering 15 CWEs. Using this benchmark, we evaluate 29 standalone LLMs and 15 code agents built on 3 state-of-the-art agent frameworks. We find that state-of-the-art LLMs struggle to generate code completions that are both correct and secure, whereas code agents significantly outperform standalone LLMs. We also show that SecRepoBench is more difficult than the prior state-of-the-art benchmark. Finally, our comprehensive analysis provides insights into potential directions for improving the ability of code agents to write correct and secure code in real-world repositories.