Meta-Black-Box Optimization (MetaBBO) streamlines the automation of optimization algorithm design through meta-learning. It typically employs a bi-level structure: a meta-level policy undergoes meta-training to reduce the manual effort required to develop algorithms for the low-level optimization tasks. The original MetaBox (2023) provided the first open-source framework for reinforcement-learning-based single-objective MetaBBO; however, its relatively narrow scope no longer keeps pace with the swift advancement of this field. In this paper, we introduce MetaBox-v2 (https://github.com/MetaEvo/MetaBox) as a milestone upgrade with four novel features: 1) a unified architecture supporting RL-based, evolutionary, and gradient-based approaches, with which we reproduce $23$ up-to-date baselines; 2) efficient parallelization schemes that reduce training/testing time by $10$-$40$x; 3) a comprehensive benchmark suite of $18$ synthetic/realistic tasks ($1900$+ instances) spanning single-objective, multi-objective, multi-model, and multi-task optimization scenarios; 4) plentiful and extensible interfaces for custom analysis/visualization and for integration with external optimization tools/benchmarks. To demonstrate the utility of MetaBox-v2, we carry out a systematic case study evaluating the built-in baselines in terms of optimization performance, generalization ability, and learning efficiency. From thorough and detailed analysis, we distill valuable insights for practitioners and newcomers to the field.
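The bi-level structure mentioned above can be sketched in a few lines. This is a minimal, purely illustrative toy (not the MetaBox-v2 API): the low level runs a simple (1+1) random search on a black-box task, and the meta level "trains" by selecting, across episodes, the search hyperparameter that performs best. All function names here are hypothetical.

```python
import random

def sphere(x):
    """Toy low-level black-box task: minimize the sphere function."""
    return sum(v * v for v in x)

def low_level_search(task, step_size, dim=5, iters=50, seed=0):
    """Low level: a (1+1) random search whose mutation step size is
    dictated by the meta-level policy."""
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best = task(x)
    for _ in range(iters):
        cand = [v + rng.gauss(0.0, step_size) for v in x]
        f = task(cand)
        if f < best:
            x, best = cand, f
    return best

def meta_train(task, candidate_steps, episodes=3):
    """Meta level: pick the step size with the best average outcome
    across episodes -- a crude stand-in for meta-training an
    RL/evolutionary/gradient-based policy over task instances."""
    avg = {
        s: sum(low_level_search(task, s, seed=e) for e in range(episodes)) / episodes
        for s in candidate_steps
    }
    return min(avg, key=avg.get)

best_step = meta_train(sphere, [0.01, 0.3, 2.0])
print("meta-selected step size:", best_step)
```

A real MetaBBO policy would condition on optimization state and generalize to unseen tasks; this sketch only conveys the meta-level/low-level division of labor.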