Security vulnerabilities often arise unintentionally during development due to a lack of security expertise and high code complexity. Traditional tools, such as static and dynamic analysis, detect vulnerabilities only after they have been introduced into the code, leading to costly remediation. This work explores a proactive strategy to prevent vulnerabilities by highlighting code regions that implement security-critical functionality -- such as data access, authentication, and input handling -- and providing guidance for their secure implementation. We present an IntelliJ IDEA plugin prototype that uses code-level software metrics to identify potentially security-critical methods and large language models (LLMs) to generate prevention-oriented explanations. Our initial evaluation on the Spring PetClinic application shows that the selected metrics identify most known security-critical methods, while an LLM provides actionable, prevention-focused insights. Although these metrics capture structural rather than semantic aspects of security, this work lays the foundation for code-level security-aware metrics and enhanced explanations.