The evolution of Large Language Model (LLM) agents for software engineering (SWE) is constrained by the scarcity of verifiable datasets, a bottleneck stemming from the complexity of constructing executable environments across diverse languages. To address this, we introduce MEnvAgent, a Multi-language framework for automated Environment construction that facilitates scalable generation of verifiable task instances. MEnvAgent employs a multi-agent Planning-Execution-Verification architecture to autonomously resolve construction failures and integrates a novel Environment Reuse Mechanism that reduces computational overhead by incrementally patching historical environments. Evaluations on MEnvBench, a new benchmark comprising 1,000 tasks across 10 languages, demonstrate that MEnvAgent outperforms baselines, improving Fail-to-Pass (F2P) rates by 8.6% while reducing time costs by 43%. Additionally, we demonstrate the utility of MEnvAgent by constructing MEnvData-SWE, the largest open-source polyglot dataset of realistic verifiable Docker environments to date, alongside solution trajectories that enable consistent performance gains on SWE tasks across a wide range of models. Our code, benchmark, and dataset are available at https://github.com/ernie-research/MEnvAgent.