Large Language Models (LLMs) have strong code-comprehension capabilities, but fine-tuning costs and semantic-alignment issues limit their project-specific optimization; conversely, code models such as CodeBERT are easy to fine-tune but often struggle to learn vulnerability semantics from complex code. To address these challenges, this paper introduces M2CVD, a Multi-Model Collaborative Vulnerability Detection approach that leverages LLMs' strong ability to analyze vulnerability semantics to improve the detection accuracy of code models. M2CVD employs a novel collaborative process: it first improves the quality of the vulnerability semantic descriptions produced by LLMs using the code models' understanding of project code, and then uses these refined descriptions to boost the code models' detection accuracy. We demonstrate M2CVD's effectiveness on two real-world datasets, where it significantly outperforms the baselines. We further show that the M2CVD collaborative method extends to other LLMs and code models, improving their accuracy on vulnerability detection tasks.
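The collaborative process summarized above can be sketched as a minimal loop: an LLM drafts a vulnerability description, the code model's prediction is used to check and refine that description, and the refined description then augments the code model's input. The sketch below is only an illustration of this flow under stated assumptions; `llm_describe` and `code_model_predict` are hypothetical stubs standing in for a real LLM and a fine-tuned code model such as CodeBERT.

```python
# Hypothetical sketch of the M2CVD collaboration loop (not the authors' code).
# Both model calls are keyword-matching stubs for illustration only.

def llm_describe(code: str) -> str:
    """Stub LLM: produce an initial vulnerability semantic description."""
    return "possible buffer overflow" if "strcpy" in code else "no obvious issue"

def code_model_predict(code: str, description: str = "") -> bool:
    """Stub code model: classify code (optionally augmented with a
    vulnerability description) as vulnerable (True) or not (False)."""
    text = code + " " + description
    return "strcpy" in text or "overflow" in text

def m2cvd_detect(code: str) -> bool:
    # Step 1: the LLM generates an initial vulnerability description.
    desc = llm_describe(code)
    # Step 2: if the code model's view of the project code disagrees with
    # the description, the description is refined (in the real system,
    # by re-prompting the LLM with the code model's feedback).
    if code_model_predict(code) != ("overflow" in desc):
        desc = llm_describe(code)
    # Step 3: the refined description augments the code model's input
    # for the final detection decision.
    return code_model_predict(code, desc)

print(m2cvd_detect("strcpy(buf, user_input);"))       # stub flags the unsafe call
print(m2cvd_detect("strncpy(buf, s, sizeof buf);"))   # stub sees no issue
```

In the paper's actual pipeline the refinement step and the augmented input are learned components; the stubs here only make the three-step control flow concrete.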