The increasing complexity of software systems and the growing sophistication of cyber-attacks have underscored the need for reliable automated software vulnerability detection. Data-driven approaches using deep learning models show promise but depend critically on the availability of large, accurately labeled datasets. Yet existing datasets suffer from noisy labels or limited vulnerability coverage, or fail to reflect vulnerabilities as they occur in real-world software. This also limits large-scale benchmarking of such solutions. Automated vulnerability injection offers a way to address these limitations, but existing techniques remain limited in coverage, contextual fidelity, or injection success. In this paper, we present AVIATOR, the first AI-agentic vulnerability injection framework. AVIATOR decomposes vulnerability injection into a coordinated workflow of specialized AI agents, tool-based analysis, and iterative self-correction, explicitly mirroring expert reasoning. It integrates retrieval-augmented generation (RAG) and lightweight LoRA-based fine-tuning to produce realistic, category-specific vulnerabilities without relying on handcrafted patterns. Across three benchmarks, AVIATOR achieves high injection fidelity (91-95%), surpassing existing injection techniques in both accuracy and vulnerability coverage. When used for data augmentation to train deep learning-based vulnerability detection (DLVD) models, AVIATOR yields the strongest downstream gains in vulnerability detection. Across models and base datasets, AVIATOR improves average F1 scores by +22% over no augmentation, by +25% over VGX, which previously held the best injection success rate, and by +3% over VulScribeR, the prior state-of-the-art LLM-based injection approach, with +7% higher recall and no loss in precision. Its augmented data exhibits the lowest distributional distortion and scales efficiently, with a syntax rejection rate under 2% at 4.3x lower cost than VulScribeR.