The rise of Large Language Models (LLMs) as coding agents promises to accelerate software development, but the reproducibility of the code they generate remains largely unexplored. This paper presents an empirical study investigating whether LLM-generated code can be executed successfully in a clean environment containing only OS packages, using only the dependencies the model itself specifies. We evaluate three state-of-the-art LLM coding agents (Claude Code, OpenAI Codex, and Gemini) across 300 projects generated from 100 standardized prompts in Python, JavaScript, and Java. We introduce a three-layer dependency framework (distinguishing between claimed, working, and runtime dependencies) to quantify execution reproducibility. Our results show that only 68.3% of projects execute out of the box, with substantial variation across languages (Python 89.2%, Java 44.0%). We also find an average 13.5× expansion from claimed to actual runtime dependencies, revealing a significant body of hidden dependencies.