Handcrafted optimizers become prohibitively inefficient for complex black-box optimization (BBO) tasks. MetaBBO addresses this challenge by meta-learning to automatically configure optimizers for low-level BBO tasks, thereby eliminating heuristic dependencies. However, existing methods typically require extensive handcrafted training tasks to learn meta-strategies that generalize to target tasks, which poses a critical limitation for realistic applications with unknown task distributions. To overcome this limitation, we propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using only optimization data from the target task, obviating the need for predefined task distributions. Unlike conventional MetaBBO frameworks that decouple the meta-training and optimization phases, ABOM introduces a closed-loop adaptive parameter learning mechanism, in which parameterized evolutionary operators continuously self-update by leveraging the populations generated during optimization. This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle path planning problems without any handcrafted training tasks. Visualization studies reveal that the parameterized evolutionary operators exhibit statistically significant search patterns, including natural selection and genetic recombination.
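The closed-loop idea described above can be illustrated with a minimal sketch. This is not the ABOM architecture itself (whose operators are learned neural parameterizations); it is a toy analogue, assuming a single scalar operator parameter (a Gaussian mutation step size `sigma`) that self-updates online from the success rate observed in the generated population, with no pretraining tasks. The objective `sphere` and all function names are illustrative choices, not from the paper.

```python
import random

def sphere(x):
    # Toy BBO objective (minimization): f(x) = sum of squares.
    return sum(v * v for v in x)

def closed_loop_optimize(f, dim=5, pop_size=20, generations=100, seed=0):
    """Toy closed-loop adaptation: the operator parameter `sigma` is
    updated during optimization using only data from the target task
    (the fraction of offspring that improve on their parents)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    sigma = 1.0  # parameterized operator: mutation step size, adapted online
    for _ in range(generations):
        successes = 0
        new_pop = []
        for ind in pop:
            child = [v + rng.gauss(0.0, sigma) for v in ind]
            if f(child) < f(ind):      # selection: keep improving offspring
                new_pop.append(child)
                successes += 1
            else:
                new_pop.append(ind)
        # Self-update of the operator parameter from the generated population:
        # widen the search when offspring often succeed, narrow it otherwise.
        sigma *= 1.1 if successes / pop_size > 0.2 else 0.9
        pop = new_pop
    best = min(pop, key=f)
    return best, f(best)

best, best_val = closed_loop_optimize(sphere)
```

The update rule here is a classical success-rate heuristic standing in for ABOM's learned parameter updates; the point is only the control flow: operator parameters change inside the optimization loop rather than in a separate meta-training phase.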