Handcrafted optimizers become prohibitively inefficient for complex black-box optimization (BBO) tasks. MetaBBO addresses this challenge by meta-learning to automatically configure optimizers for low-level BBO tasks, thereby eliminating heuristic dependencies. However, existing methods typically require extensive handcrafted training tasks to learn meta-strategies that generalize to target tasks, which poses a critical limitation for realistic applications with unknown task distributions. To overcome this issue, we propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using solely optimization data from the target task, obviating the need for predefined task distributions. Unlike conventional MetaBBO frameworks that decouple the meta-training and optimization phases, ABOM introduces a closed-loop adaptive parameter learning mechanism, in which parameterized evolutionary operators continuously self-update by leveraging the populations generated during optimization. This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle path planning problems without any handcrafted training tasks. Visualization studies reveal that the parameterized evolutionary operators exhibit statistically significant search patterns, including natural selection and genetic recombination.
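The closed-loop mechanism described above can be illustrated with a minimal sketch: an evolutionary loop whose operator parameter self-updates online using only the populations it generates, with no pre-trained meta-strategy. The function names, the single step-size parameter, and the 1/5-success adaptation rule here are illustrative assumptions, not ABOM's actual parameterized operators.

```python
import random

def sphere(x):
    # Toy BBO objective (minimize); stands in for an unknown target task.
    return sum(v * v for v in x)

def closed_loop_sketch(f, dim=5, pop_size=20, generations=100, seed=0):
    """Hedged sketch of closed-loop adaptive parameter learning:
    the mutation operator's parameter (step size sigma) is updated
    during optimization from the population just generated, rather
    than being fixed by a separate meta-training phase."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    sigma = 1.0  # the operator's learnable parameter, adapted online
    for _ in range(generations):
        successes = 0
        for i in range(pop_size):
            child = [v + rng.gauss(0.0, sigma) for v in pop[i]]
            fc = f(child)
            if fc < fit[i]:  # selection-style replacement of the parent
                pop[i], fit[i] = child, fc
                successes += 1
        # Closed-loop update: the parameter adapts to the success rate
        # observed in the population generated this iteration.
        rate = successes / pop_size
        sigma *= 1.2 if rate > 0.2 else 0.8
    return min(fit)

best = closed_loop_sketch(sphere)
```

Even this crude single-parameter loop needs no training tasks: all adaptation signal comes from the target task itself, which is the property the abstract claims at much greater generality.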