The emergence of vulnerability detection methods based on pre-trained models has significantly advanced the field of automated vulnerability detection. However, these methods still face several challenges, such as difficulty in learning effective statement-level feature representations for fine-grained prediction and in processing overly long code sequences. To address these issues, this study introduces StagedVulBERT, a novel vulnerability detection framework that leverages a pre-trained code language model and employs a coarse-to-fine strategy. The key innovation and contribution of our research lie in the CodeBERT-HLS component within our framework, which is specialized in hierarchical, layered semantic encoding. This component is designed to capture semantics at both the token and statement levels simultaneously, which is crucial for achieving more accurate multi-granular vulnerability detection. Additionally, CodeBERT-HLS efficiently processes longer code token sequences, making it better suited to real-world vulnerability detection. Comprehensive experiments demonstrate that our method improves vulnerability detection performance at both the coarse- and fine-grained levels. Specifically, in coarse-grained vulnerability detection, StagedVulBERT achieves an F1 score of 92.26%, a 6.58% improvement over the best-performing baseline. At the fine-grained level, our method achieves a Top-5% accuracy of 65.69%, outperforming state-of-the-art methods by up to 75.17%.