Can LLM agents explore codebases and reason about code semantics without executing the code? We study this capability, which we call agentic code reasoning, and introduce semi-formal reasoning: a structured prompting methodology that requires agents to construct explicit premises, trace execution paths, and derive formal conclusions. Unlike unstructured chain-of-thought, semi-formal reasoning acts as a certificate: the agent cannot skip cases or make unsupported claims. We evaluate across three tasks (patch equivalence verification, fault localization, and code question answering) and show that semi-formal reasoning consistently improves accuracy on all of them. For patch equivalence, accuracy improves from 78% to 88% on curated examples and reaches 93% on real-world agent-generated patches, approaching the reliability needed for execution-free RL reward signals. For code question answering on RubberDuckBench (Mohammad et al., 2026), semi-formal reasoning achieves 87% accuracy. For fault localization on Defects4J (Just et al., 2014), semi-formal reasoning improves Top-5 accuracy by 5 percentage points over standard reasoning. These results demonstrate that structured agentic reasoning enables meaningful semantic code analysis without execution, opening practical applications in RL training pipelines, code review, and static program analysis.