Background: Automated Vulnerability Repair (AVR) is a fast-growing branch of program repair. Recent studies show that large language models (LLMs) outperform traditional techniques, extending their success beyond code generation and fault detection. Hypothesis: These gains may be driven by hidden factors -- "invisible hands" such as training-data leakage or perfect fault localization -- that let an LLM reproduce human-authored fixes for the same code. Objective: We replicate prior AVR studies under controlled conditions by deliberately adding errors to the reported vulnerability location in the prompt. If LLMs merely regurgitate memorized fixes, both small and large localization errors should yield the same number of correct patches, because any offset should divert the model from the original fix. Method: Our pipeline repairs vulnerabilities from the Vul4J and VJTrans benchmarks after shifting the fault location by n lines from the ground truth. A first LLM generates a patch, a second LLM reviews it, and we validate the result with regression and proof-of-vulnerability tests. Finally, we manually audit a sample of patches and estimate the error rate with the Agresti-Coull-Wilson method.
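The error-rate estimate from the manual audit can be sketched as follows. This is a minimal illustration of the Agresti-Coull adjusted interval (which centers the estimate as in Wilson's method), not the authors' actual analysis code, and the sample counts in the usage example are hypothetical.

```python
import math

def agresti_coull_interval(errors: int, sample_size: int, z: float = 1.96):
    """Approximate confidence interval for a binomial proportion.

    Agresti-Coull adjustment: add z^2 pseudo-trials (half of them
    pseudo-successes), then apply the standard Wald formula to the
    adjusted estimate. z = 1.96 gives a ~95% interval.
    """
    n_tilde = sample_size + z ** 2
    p_tilde = (errors + z ** 2 / 2) / n_tilde
    half_width = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    # Clamp to [0, 1] since the approximation can overshoot the bounds.
    return max(0.0, p_tilde - half_width), min(1.0, p_tilde + half_width)

# Hypothetical audit: 5 mislabeled patches found in a sample of 50.
lo, hi = agresti_coull_interval(5, 50)
```

The adjustment matters here because audit samples are small and observed error counts are low, a regime where the naive Wald interval is known to undercover.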