This study examines the impact of data snooping on neural networks for vulnerability detection in lifted code, building on previous research that used word2vec and unidirectional and bidirectional transformer-based embeddings. The research specifically focuses on how model performance is affected when embedding models are trained on datasets that include samples also used for neural-network training and validation. The results show that introducing data snooping did not significantly alter model performance, suggesting either that data snooping had a minimal impact or that samples randomly dropped as part of the methodology contained hidden features critical to achieving optimal performance. In addition, the findings reinforce the conclusions of previous research, which found that models trained with GPT-2 embeddings consistently outperformed neural networks trained with other embeddings. The fact that this holds even when data snooping is introduced into the embedding model indicates GPT-2's robustness in representing complex code features, even under less-than-ideal conditions.
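The overlap the study introduces can be illustrated with a minimal sketch contrasting a clean embedding protocol against a snooped one; all sample names and split sizes here are hypothetical, not taken from the study:

```python
import random

# Hypothetical corpus of lifted-code samples (identifiers are illustrative).
corpus = [f"sample_{i}" for i in range(100)]

random.seed(0)
random.shuffle(corpus)
train, val, test = corpus[:70], corpus[70:85], corpus[85:]

# Clean protocol: the embedding model is trained only on the classifier's
# training split, so downstream validation/test data remains unseen.
embedding_corpus_clean = list(train)

# Data-snooping protocol: the embedding model is trained on the full
# corpus, so it has already seen the samples later used to validate and
# test the neural network.
embedding_corpus_snooped = list(corpus)

leaked = set(embedding_corpus_snooped) & (set(val) | set(test))
print(len(leaked))  # number of validation/test samples visible to the embedder
```

In this toy split, all 30 validation and test samples leak into the snooped embedding stage, which is the condition whose downstream effect the study measures.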