Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with better algorithmic complexity within a limited evolution budget. This inefficiency stems from three bottlenecks: initialization bias that traps evolution in poor solution regions, uncontrolled stochastic operations that lack feedback guidance, and insufficient reuse of experience across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad coverage of the solution space. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at the inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at https://github.com/QuantaAlpha/EvoControl.
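To make the "feedback-guided mutation and compositional crossover" idea concrete, here is a minimal, hypothetical sketch on toy integer genomes. It is not the paper's actual code-level operators: the `verify` fitness, the per-gene feedback, and the population size are all illustrative assumptions. The key contrast with uncontrolled stochastic evolution is that mutation targets the gene with the worst feedback, and crossover composes the lower-error gene from each parent at every position.

```python
import random

def verify(candidate, target):
    # Verification step: per-gene feedback (absolute error of each gene).
    return [abs(c - t) for c, t in zip(candidate, target)]

def mutate(candidate, feedback):
    # Feedback-guided (targeted) mutation: perturb only the worst-scoring gene,
    # instead of mutating a random position.
    worst = max(range(len(candidate)), key=lambda i: feedback[i])
    child = list(candidate)
    child[worst] += random.choice([-1, 1])
    return child

def crossover(a, fa, b, fb):
    # Compositional crossover: at each position, keep whichever parent's
    # gene received better (lower-error) feedback.
    return [x if ex <= ey else y for x, y, ex, ey in zip(a, b, fa, fb)]

def evolve(population, target, generations=200, seed=0):
    random.seed(seed)
    for _ in range(generations):
        # Rank the population by total verification error (best first).
        scored = sorted(population, key=lambda c: sum(verify(c, target)))
        a, b = scored[0], scored[1]
        child = crossover(a, verify(a, target), b, verify(b, target))
        child = mutate(child, verify(child, target))
        # Refine step: the child replaces the worst individual only if it improves on it.
        if sum(verify(child, target)) < sum(verify(scored[-1], target)):
            population = scored[:-1] + [child]
        else:
            population = scored
    return min(population, key=lambda c: sum(verify(c, target)))
```

In this toy setting the population converges toward the target vector far faster than unguided random mutation would, mirroring the abstract's claim that feedback guidance raises exploration efficiency within a fixed budget.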