In this paper we study consensus-based optimization (CBO), a multi-agent metaheuristic derivative-free optimization method that can globally minimize nonconvex, nonsmooth functions and is amenable to theoretical analysis. Based on an experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a large class of optimization problems as the number of optimizing agents goes to infinity. Furthermore, we improve prior analyses by requiring only mild assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result of CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results makes it possible to obtain probabilistic global convergence guarantees for the numerical CBO method.
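To make the method concrete, the following is a minimal numerical sketch of CBO, not the exact scheme analyzed in the paper: agents are attracted to a consensus point, a softmax-weighted average of their positions with weights exp(-alpha f), while scaled noise drives exploration. All function names, parameter values, and the test objective are illustrative assumptions.

```python
import numpy as np

def cbo_minimize(f, x0, alpha=100.0, lam=1.0, sigma=0.8, dt=0.01, steps=2000, rng=None):
    """Euler-Maruyama discretization of (isotropic) CBO dynamics; a sketch only.

    f  : objective, maps an (N, d) array of agent positions to (N,) values
    x0 : (N, d) array of initial agent positions
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(x0, dtype=float).copy()
    N, d = X.shape
    for _ in range(steps):
        fx = f(X)
        # Consensus point: weighted average with weights exp(-alpha f); by the
        # Laplace principle it concentrates on the best agent as alpha grows.
        w = np.exp(-alpha * (fx - fx.min()))  # shift exponent for stability
        v = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - v
        # Drift toward the consensus point plus scaled exploration noise:
        #   dX = -lam (X - v) dt + sigma |X - v| dW
        noise = rng.standard_normal((N, d))
        X = (X - lam * dt * diff
               + sigma * np.sqrt(dt)
               * np.linalg.norm(diff, axis=1, keepdims=True) * noise)
    return v

# Hypothetical usage on the nonconvex Rastrigin function (global minimizer at 0).
def rastrigin(X):
    return 10 * X.shape[1] + (X**2 - 10 * np.cos(2 * np.pi * X)).sum(axis=1)

rng = np.random.default_rng(0)
x0 = rng.uniform(-3.0, 3.0, size=(200, 2))
xstar = cbo_minimize(rastrigin, x0, rng=rng)
```

The derivative-free character of the method is visible here: only evaluations of `f` enter the update, which is why CBO applies to nonsmooth objectives.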