Zero-shot optimization involves optimizing a target task that was not seen during training, aiming to provide a near-optimal solution with no or minimal adjustments to the optimizer. Such capability is crucial for reliable and robust performance across applications. Current optimizers often struggle in the zero-shot setting and require intricate hyperparameter tuning to adapt to new tasks. To address this, we propose a Pretrained Optimization Model (POM) that leverages knowledge gained from optimizing diverse tasks, offering efficient solutions to zero-shot optimization through direct application or fine-tuning with few-shot samples. Evaluation on the BBOB benchmark and two robot control tasks demonstrates that POM outperforms state-of-the-art black-box optimization methods, especially on high-dimensional tasks. Fine-tuning POM with a small number of samples and a small budget yields significant performance improvements. Moreover, POM generalizes robustly across diverse task distributions, dimensions, population sizes, and optimization horizons. Code is available at https://github.com/ninja-wm/POM/.