Dynamic environments pose significant challenges for expensive optimization problems, as the objective functions of these problems change over time and thus demand substantial computational resources to track the optimal solutions. Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches in dynamic environments remain largely unexplored. In this paper, we propose a simple yet effective meta-learning-based optimization framework for solving expensive dynamic optimization problems. This framework is flexible, allowing any off-the-shelf continuously differentiable surrogate model to be used in a plug-in manner, in either data-driven evolutionary optimization or BO approaches. In particular, the framework consists of two unique components: 1) a meta-learning component, in which a gradient-based meta-learning approach is adopted to learn experience (effective model parameters) across different dynamics along the optimization process; and 2) an adaptation component, in which the learned experience (model parameters) serves as the initial parameters for fast adaptation in the dynamic environment based on few-shot samples. By doing so, the optimization process is able to quickly initiate the search in a new environment within a strictly restricted computational budget. Experiments demonstrate the effectiveness of the proposed framework compared with several state-of-the-art algorithms on common benchmark test problems under different dynamic characteristics.
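The meta-learn-then-adapt loop described above can be sketched as follows. This is a minimal first-order (Reptile-style) illustration only, not the paper's actual algorithm: the moving-optimum quadratic tasks, the linear surrogate model, and all hyperparameters are hypothetical choices made purely for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift):
    """A 'dynamic environment': an expensive objective whose optimum moves over time."""
    def f(x):
        return np.sum((x - shift) ** 2, axis=-1)
    return f

def fit_step(theta, X, y, lr):
    """One gradient step on the MSE loss of a linear surrogate y ~ X @ w + b."""
    w, b = theta
    err = X @ w + b - y
    grad_w = 2 * X.T @ err / len(y)
    grad_b = 2 * err.mean()
    return (w - lr * grad_w, b - lr * grad_b)

def adapt(theta, task, n_samples=5, steps=10, lr=0.05):
    """Fast adaptation: refit the surrogate from theta using few-shot samples."""
    X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
    y = task(X)
    for _ in range(steps):
        theta = fit_step(theta, X, y, lr)
    return theta, X, y

# Meta-learning component: accumulate experience (model parameters) across
# environments by nudging the meta-parameters toward each task-adapted solution.
theta = (np.zeros(2), 0.0)
for _ in range(200):
    task = make_task(rng.uniform(-0.5, 0.5, size=2))
    (w_a, b_a), _, _ = adapt(theta, task)
    theta = (theta[0] + 0.1 * (w_a - theta[0]),
             theta[1] + 0.1 * (b_a - theta[1]))

# Adaptation component: a new environment arrives; the learned parameters are
# the initialization for few-shot refitting under a tight sample budget.
new_task = make_task(np.array([0.3, -0.2]))
adapted, X_new, y_new = adapt(theta, new_task)
```

The key point the sketch captures is the division of labor: the outer loop runs once per environment change and only stores parameters, while the inner `adapt` call consumes the strictly restricted evaluation budget (here, five samples).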