We introduce CodeEvolve, an open-source framework that combines large language models (LLMs) with evolutionary search to synthesize high-performing algorithmic solutions. CodeEvolve couples an island-based genetic algorithm with modular LLM orchestration, using execution feedback and task-specific metrics to guide selection and variation. Exploration and exploitation are balanced through context-aware recombination, adaptive meta-prompting, and targeted refinement of promising solutions. We evaluate CodeEvolve on benchmarks used to assess Google DeepMind's AlphaEvolve, and include direct comparisons with popular open-source frameworks for algorithmic discovery and heuristic design. Our results show that CodeEvolve achieves state-of-the-art (SOTA) performance on several tasks, with open-weight models often matching or exceeding closed-source baselines at a fraction of the compute cost. We provide extensive ablations, practical hyperparameter guidance, and release our framework and experimental results at https://github.com/inter-co/science-codeevolve.
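To make the island-based evolutionary loop concrete, the sketch below shows a minimal island-model genetic algorithm: each island evolves its own population, and champions periodically migrate between islands in a ring. This is an illustrative toy, not CodeEvolve's implementation; in particular, the `mutate` callback here is simple numeric noise, whereas CodeEvolve would instead have an LLM rewrite a candidate program guided by execution feedback. All names (`island_ga`, `migration_interval`, etc.) are hypothetical.

```python
import random

def island_ga(fitness, mutate, init, n_islands=3, pop_size=8,
              generations=20, migration_interval=5, seed=0):
    """Minimal island-model GA: islands evolve independently; every
    `migration_interval` generations, each island's champion replaces
    the worst individual of the next island (ring topology)."""
    rng = random.Random(seed)
    islands = [[init(rng) for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        for pop in islands:
            # Selection: keep the top half by fitness (exploitation).
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]
            # Variation: refill with mutated copies of parents (exploration).
            pop[pop_size // 2:] = [mutate(rng.choice(parents), rng)
                                   for _ in range(pop_size - len(parents))]
        if gen % migration_interval == 0:
            # Ring migration: island i receives the champion of island i-1.
            champions = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop.sort(key=fitness)
                pop[0] = champions[(i - 1) % n_islands]
    return max((ind for pop in islands for ind in pop), key=fitness)

# Toy task: maximize -(x - 3)^2 over real x, so the optimum is x = 3.
best = island_ga(
    fitness=lambda x: -(x - 3.0) ** 2,
    mutate=lambda x, rng: x + rng.gauss(0, 0.5),
    init=lambda rng: rng.uniform(-10, 10),
)
print(best)
```

Keeping islands mostly isolated preserves diverse lineages (exploration), while periodic migration spreads strong solutions (exploitation), which is the balance the abstract describes.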