Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with superior complexity within limited budgets. This inefficiency stems from three sources: initialization bias that traps evolution in poor solution regions, uncontrolled stochastic operations that lack feedback guidance, and insufficient experience utilization across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad coverage of the solution space. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at the inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at https://github.com/QuantaAlpha/EvoControl.
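The controlled evolution loop described above can be illustrated with a deliberately tiny toy problem. This is a minimal sketch, not the paper's implementation: the real CSE mutates and crosses over LLM-generated code and scores it with test execution and complexity measurements, whereas here the "candidate" is a short string, the verifier is a hypothetical match-against-target score, and all names (`verify`, `targeted_mutation`, `TARGET`, etc.) are made up for illustration. The sketch shows the three mechanisms the abstract names: diversified initialization, feedback-guided (rather than uniform) mutation and crossover, and a memory of successful experiences.

```python
import random

random.seed(0)

ALPHABET = "abcdef"
TARGET = "decaf"  # hypothetical verification target; stands in for a test suite


def verify(cand: str) -> float:
    # Toy verifier: fraction of correct positions. Real CSE would run tests
    # and measure time/space complexity on benchmark tasks instead.
    return sum(a == b for a, b in zip(cand, TARGET)) / len(TARGET)


def failing_positions(cand: str) -> list[int]:
    # Verifier feedback: which parts of the candidate fail.
    return [i for i in range(len(TARGET)) if cand[i] != TARGET[i]]


def diversified_init(n: int) -> list[str]:
    # Diversified initialization: independent random starting points,
    # covering different regions of the (toy) solution space.
    return ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(n)]


memory: dict[int, str] = {}  # intra-task memory: position -> symbol known to pass


def targeted_mutation(cand: str) -> str:
    # Feedback-guided mutation: change only a failing position, preferring
    # symbols the memory has already seen succeed there.
    bad = failing_positions(cand)
    if not bad:
        return cand
    i = random.choice(bad)
    return cand[:i] + memory.get(i, random.choice(ALPHABET)) + cand[i + 1:]


def crossover(a: str, b: str) -> str:
    # Compositional crossover: at each position keep whichever parent's
    # symbol the verifier accepts, falling back to parent a.
    return "".join(b[i] if b[i] == TARGET[i] else a[i] for i in range(len(TARGET)))


def evolve(pop_size: int = 12, generations: int = 300) -> str:
    pop = diversified_init(pop_size)
    for _ in range(generations):
        pop.sort(key=verify, reverse=True)
        # Record the best candidate's passing parts in memory for reuse.
        for i in range(len(TARGET)):
            if pop[0][i] == TARGET[i]:
                memory[i] = pop[0][i]
        if verify(pop[0]) == 1.0:
            return pop[0]
        parents = pop[:4]
        pop = parents + [
            targeted_mutation(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size)
        ]
    pop.sort(key=verify, reverse=True)
    return pop[0]


best = evolve()
```

Because mutation and crossover consume verifier feedback instead of acting uniformly at random, and the memory reuses previously successful components, the loop converges far faster than blind random search, which is the efficiency argument the abstract makes.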