Many developers rely on Large Language Models (LLMs) to facilitate software development. Nevertheless, these models have exhibited limited capabilities in the security domain. We introduce LLMSecGuard, an open-source framework that offers enhanced code security through the synergy between static code analyzers and LLMs. LLMSecGuard aims to equip practitioners with code solutions that are more secure than the code initially generated by LLMs. It also benchmarks LLMs, providing valuable insights into the evolving security properties of these models.
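To make the described synergy concrete, here is a minimal sketch of the feedback loop the abstract alludes to: code produced by an LLM is passed through a static analyzer, and any reported findings are fed back to the model for revision. This is not LLMSecGuard's actual API; the function names (query_llm, run_static_analyzer, secure_generate) and the loop structure are illustrative assumptions, with the LLM call and analyzer stubbed out so the sketch runs standalone.

```python
# Hypothetical sketch of an LLM / static-analyzer feedback loop.
# None of these names come from LLMSecGuard; they are placeholders.

from typing import List


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., an HTTP request to a model API)."""
    return "def read_file(path):\n    return open(path).read()\n"


def run_static_analyzer(code: str) -> List[str]:
    """Placeholder for a static analyzer run (e.g., invoking a CLI tool
    on the generated code and parsing its report into findings)."""
    return []  # an empty list means no issues were reported


def secure_generate(task: str, max_rounds: int = 3) -> str:
    """Generate code, then iteratively ask the LLM to fix reported issues."""
    code = query_llm(task)
    for _ in range(max_rounds):
        findings = run_static_analyzer(code)
        if not findings:
            break  # analyzer reports no remaining issues; stop iterating
        feedback = "Revise the code to fix these issues:\n" + "\n".join(findings)
        code = query_llm(feedback + "\n\n" + code)
    return code


if __name__ == "__main__":
    print(secure_generate("Write a function that reads a file."))
```

The bounded number of rounds reflects a practical design choice under these assumptions: the loop terminates either when the analyzer is satisfied or when a revision budget is exhausted, rather than iterating indefinitely.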