LLMs demonstrate promising performance in software vulnerability detection after fine-tuning. However, it remains unclear whether these gains reflect a genuine understanding of vulnerability root causes or merely the exploitation of functional patterns. In this paper, we identify a critical failure mode, termed the "semantic trap," in which fine-tuned LLMs achieve high detection scores by associating certain functional domains with vulnerability likelihood rather than reasoning about the underlying security semantics. To systematically evaluate this phenomenon, we propose TrapEval, a comprehensive evaluation framework designed to disentangle vulnerability root causes from functional patterns. TrapEval introduces two complementary datasets derived from real-world open-source projects: V2N, which pairs vulnerable code with unrelated benign code, and V2P, which pairs vulnerable code with its corresponding patched version, forcing models to distinguish near-identical code that differs only in subtle security-critical logic. Using TrapEval, we fine-tune five representative state-of-the-art LLMs from three model families and evaluate them under cross-dataset testing, semantic-preserving perturbations, and varying degrees of semantic gap as measured by CodeBLEU. Our empirical results reveal that, despite improved metrics, fine-tuned LLMs consistently struggle to distinguish vulnerable code from its patched counterpart, exhibit severe robustness degradation under minor semantic-preserving transformations, and rely heavily on functional-context shortcuts when the semantic gap is small. These findings provide strong evidence that current fine-tuning practices often fail to impart true vulnerability reasoning. They serve as a wake-up call: high scores on traditional benchmarks may be illusory, masking a model's inability to grasp the true causal logic of vulnerabilities.