Neural surrogate models have emerged as powerful and efficient tools in data mining. Meanwhile, large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks. We investigate a novel application: using LLMs as surrogate models for code execution prediction. Because LLMs can understand and process diverse programs, they offer a promising direction for building general-purpose surrogate models. To systematically investigate this capability, we introduce SURGE, a comprehensive benchmark of $1160$ problems covering $8$ key aspects: multi-language programming tasks, competition-level programming problems, repository-level code analysis, high-cost scientific computing, time-complexity-intensive algorithms, buggy code analysis, programs dependent on specific compilers or execution environments, and formal mathematical proof verification. Through extensive empirical analysis of $21$ open-source and proprietary LLMs, we examine scaling laws, data efficiency, and predictive accuracy. Our findings reveal important insights about the feasibility of LLMs as efficient surrogates for computational processes, with implications for automated software testing, program analysis, and computational resource optimization in data mining applications. Code and dataset are released at https://github.com/Imbernoulli/SURGE.