Large Language Models (LLMs) are increasingly integrated into the software engineering ecosystem. Their test-time compute (TTC) reasoning capabilities show significant potential for understanding program logic and semantics beyond mere token recognition. However, current benchmarks for code reasoning lack a formal, program-centric deductive framework to ensure sound evaluation, and are incapable of assessing whether models genuinely reason about program semantics or merely exploit superficial associations between natural language and code tokens. To bridge this gap, we introduce TF-Bench, a benchmark designed to evaluate LLM reasoning based on type inference in System F, a task we refer to as program semantics reasoning. By employing verified transformations to remove semantically irrelevant natural language, we construct TF-Bench_pure, a purely semantics-driven variant of TF-Bench. Our analysis reveals substantial limitations in state-of-the-art LLMs, with the best-performing LLM (Claude-3.7-sonnet) achieving only 55.85% accuracy on TF-Bench_pure. Additionally, we propose two novel metrics to assess robustness and the effectiveness of test-time reasoning, underscoring critical limitations in current LLM capabilities and highlighting essential directions for future research.
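To make the task concrete, below is a minimal illustrative sketch, not an actual TF-Bench item, of what a type-inference query and its semantics-only counterpart might look like. It uses Haskell as a stand-in surface syntax for System F terms; the function names, the renaming scheme, and the assumption that benchmark items resemble this format are all hypothetical.

```haskell
-- Hypothetical TF-Bench-style item: the model is shown the definition
-- below (without a signature) and asked for its most general type.
-- Expected answer: (a -> b) -> [a] -> [b]
mapList f []       = []
mapList f (x : xs) = f x : mapList f xs

-- Hypothetical TF-Bench_pure counterpart: identifiers are renamed so they
-- carry no natural-language hints; the expected type is unchanged, so any
-- accuracy drop suggests reliance on surface token cues rather than on
-- program semantics.
v0 v1 []        = []
v0 v1 (v2 : v3) = v1 v2 : v0 v1 v3

main :: IO ()
main = print (mapList (+ 1) [1, 2, 3], v0 (* 2) [1, 2, 3])
```

In this sketch, both definitions denote the same program, so a model that genuinely reasons about semantics should recover the same polymorphic type for each; the gap between the two scores is what the proposed robustness metric is meant to capture.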