Recent advances in large language models (LLMs) have enabled software engineering agents to tackle complex code modification tasks. Most existing approaches rely on execution feedback from containerized environments, which require dependency-complete setups and the physical execution of programs and tests. While effective, this paradigm is resource-intensive and difficult to maintain, substantially complicating agent training and limiting scalability. We propose SWE-World, a Docker-free framework that replaces physical execution environments with a learned surrogate for training and evaluating software engineering agents. SWE-World leverages LLM-based models trained on real agent-environment interaction data to predict intermediate execution outcomes and final test feedback, enabling agents to learn without interacting with physical containerized environments. This design preserves the standard agent-environment interaction loop while eliminating the costly environment construction and maintenance otherwise required during agent optimization and evaluation. Furthermore, because SWE-World can simulate the final evaluation outcomes of candidate trajectories without real submission, it can select the best solution among multiple test-time attempts, thereby enabling effective test-time scaling (TTS) on software engineering tasks. Experiments on SWE-bench Verified demonstrate that SWE-World raises Qwen2.5-Coder-32B from 6.2\% to 52.0\% with Docker-free SFT, to 55.0\% with Docker-free RL, and to 68.2\% with further TTS. The code is available at https://github.com/RUCAIBox/SWE-World.