In this paper, we study consensus-based optimization (CBO), a multi-agent metaheuristic derivative-free optimization method that can globally minimize nonconvex, nonsmooth functions and is amenable to theoretical analysis. Based on the experimentally supported intuition that, on average, CBO performs a gradient descent of the squared Euclidean distance to the global minimizer, we devise a novel technique for proving convergence to the global minimizer in mean-field law for a rich class of objective functions. The result unveils the internal mechanisms of CBO that are responsible for the success of the method. In particular, we prove that CBO performs a convexification of a large class of optimization problems as the number of optimizing agents tends to infinity. Furthermore, we improve prior analyses by requiring only mild assumptions about the initialization of the method and by covering objectives that are merely locally Lipschitz continuous. As a core component of this analysis, we establish a quantitative nonasymptotic Laplace principle, which may be of independent interest. From the result on CBO convergence in mean-field law, it becomes apparent that the hardness of any global optimization problem is necessarily encoded in the rate of the mean-field approximation, for which we provide a novel probabilistic quantitative estimate. The combination of these results allows us to obtain probabilistic global convergence guarantees for the numerical CBO method.
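The CBO dynamics summarized above can be sketched as a simple Euler–Maruyama discretization: each agent drifts toward a Gibbs-weighted consensus point and is perturbed by noise proportional to its distance from that point. This is a hedged illustration only, not the paper's exact scheme; the parameter values, the step size, and the test objective below are our own illustrative choices.

```python
import numpy as np

def cbo_step(X, f, alpha=30.0, lam=1.0, sigma=0.5, dt=0.01, rng=None):
    """One Euler-Maruyama step of a standard isotropic CBO dynamic (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    fx = f(X)
    w = np.exp(-alpha * (fx - fx.min()))          # Gibbs weights, shifted for numerical stability
    v = (w[:, None] * X).sum(axis=0) / w.sum()    # weighted consensus point
    drift = -lam * (X - v) * dt                   # deterministic drift toward consensus
    noise = (sigma * np.linalg.norm(X - v, axis=1, keepdims=True)
             * np.sqrt(dt) * rng.standard_normal(X.shape))  # diffusion scaled by distance
    return X + drift + noise

# Illustrative nonconvex, Rastrigin-style objective with global minimizer at (1, 1).
def f(X):
    Y = X - 1.0
    return (Y**2 + 0.5 * (1.0 - np.cos(2.0 * np.pi * Y))).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 2))  # 200 agents, uniformly initialized
for _ in range(2000):
    X = cbo_step(X, f, rng=rng)
# After many steps the agents concentrate near the global minimizer (1, 1).
```

As the abstract suggests, the averaged effect of the drift term resembles a gradient descent of the squared distance to the minimizer, which is the intuition behind the mean-field convergence analysis.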