This research presents a novel application of Evolutionary Computation to residential electric vehicle (EV) energy management. While reinforcement learning (RL) achieves high performance in vehicle-to-grid (V2G) optimization, it typically produces opaque "black-box" neural networks that are difficult for consumers and regulators to audit. To address this interpretability gap, we propose a program-search framework that uses Large Language Models (LLMs) as intelligent mutation operators within an iterative prompt-evaluation-repair loop. Using the high-fidelity EV2Gym simulation environment as a fitness function, the system synthesizes executable Python policies over successive refinement cycles, balancing profit maximization, user comfort, and physical safety constraints. We benchmark four prompting strategies: Imitation, Reasoning, Hybrid, and Runtime, evaluating their ability to discover adaptive control logic. Results show that the Hybrid strategy produces concise, human-readable heuristics that achieve 118% of the baseline profit, discovering complex behaviors such as anticipatory arbitrage and hysteresis without explicit programming. This work establishes LLM-driven Evolutionary Computation as a practical approach for generating EV charging control policies that are transparent, inspectable, and suitable for real residential deployment.
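The prompt-evaluation-repair loop described above can be sketched as a minimal (1+1)-style evolutionary search. The sketch below is illustrative only: the LLM mutation operator and the EV2Gym fitness function are replaced by toy stand-ins (a policy reduced to a pair of price thresholds, and profit computed on a synthetic price series), so it shows the control flow of the framework, not the actual system.

```python
import random

# Illustrative sketch only. In the paper, the mutation/repair operator is
# an LLM that rewrites the *source code* of a Python charging policy, and
# fitness comes from the EV2Gym simulator. Both are replaced here by toy
# stand-ins: a policy is a (buy_below, sell_above) price-threshold pair,
# and fitness is profit on a synthetic day-ahead price series.

PRICES = [0.10, 0.08, 0.05, 0.07, 0.12, 0.20, 0.25, 0.18, 0.09, 0.06]

def evaluate(policy):
    """Toy fitness: charge when price < buy_below, discharge (V2G) when
    price > sell_above, within state-of-charge limits; return profit."""
    buy_below, sell_above = policy
    soc, profit, step = 0.5, 0.0, 0.1  # state of charge, currency, energy step
    for price in PRICES:
        if price < buy_below and soc <= 0.9:     # cheap: buy energy
            soc += step
            profit -= price * step
        elif price > sell_above and soc >= 0.3:  # expensive: sell back
            soc -= step
            profit += price * step
    return profit

def llm_mutate(policy):
    """Stand-in for the LLM mutation operator: small random perturbation
    of the thresholds (the real system edits policy source via a prompt)."""
    buy, sell = (max(0.0, min(1.0, x + random.uniform(-0.02, 0.02)))
                 for x in policy)
    return buy, sell

def evolve(generations=300, seed=0):
    """(1+1)-style loop: mutate the incumbent, keep the child if it is
    at least as fit -- the prompt-evaluate-repair cycle in miniature."""
    random.seed(seed)
    best = (0.08, 0.15)  # hand-written seed heuristic
    best_fit = evaluate(best)
    for _ in range(generations):
        child = llm_mutate(best)
        fit = evaluate(child)
        if fit >= best_fit:
            best, best_fit = child, fit
    return best, best_fit
```

In the full system, `llm_mutate` would send the current policy's source code and its simulation trace to the LLM and parse the returned program, and `evaluate` would execute the candidate inside EV2Gym under the profit, comfort, and safety objectives.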