Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with superior time and space complexity within limited computational budgets. This inefficiency stems from three bottlenecks: initialization bias that traps evolution in poor solution regions, uncontrolled stochastic operations that lack feedback guidance, and insufficient experience utilization across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad solution space coverage. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at the inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at https://github.com/QuantaAlpha/EvoControl.
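The generate-verify-refine cycle with feedback-guided genetic operators can be sketched as a minimal evolutionary loop. This is an illustrative outline only, not the authors' implementation: the helper names (`fitness`, `mutate`, `crossover`) and the selection scheme are hypothetical stand-ins, where `fitness` plays the role of the verify step, `mutate` receives each parent's feedback score to enable targeted mutation, and `crossover` combines two strong parents.

```python
import random

def evolve(candidates, fitness, mutate, crossover, generations=10, k=4):
    """Sketch of a feedback-guided evolutionary loop over candidate solutions.

    Each generation: verify candidates via `fitness`, keep the top `k`,
    then produce children by feedback-guided mutation of each survivor
    and by crossover between two randomly chosen survivors.
    """
    pool = list(candidates)
    for _ in range(generations):
        # Verify: score and keep the k fittest candidates.
        survivors = sorted(pool, key=fitness, reverse=True)[:k]
        # Targeted mutation: each parent's feedback score guides its mutation.
        children = [mutate(parent, fitness(parent)) for parent in survivors]
        # Compositional crossover between two strong parents.
        children.append(crossover(*random.sample(survivors, 2)))
        pool = survivors + children
    return max(pool, key=fitness)
```

In a real system the candidates would be programs, `fitness` would run the verifier (tests plus efficiency measurement), and the mutation and crossover operators would be LLM calls conditioned on the feedback; here any types work as long as the three callables agree on them.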