We propose a novel method that solves global optimization problems in two steps: (1) apply an (exponential) power-$N$ transformation to the (not necessarily differentiable) objective function $f$ to obtain $f_N$, and (2) optimize the Gaussian-smoothed $f_N$ with stochastic approximation. Under mild conditions on $f$, for any $\delta>0$, we prove that with a sufficiently large power $N_\delta$, this method converges to a solution in the $\delta$-neighborhood of $f$'s global maximum point. The convergence rate is $O(d^2\sigma^4\varepsilon^{-2})$, faster than both the standard and single-loop homotopy methods. Extensive experiments show that our method requires significantly fewer iterations than the other algorithms we compare against to produce a high-quality solution.
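A minimal sketch of the two-step scheme in Python, assuming the exponential transform $f_N = \exp(N f)$ and the standard Gaussian-smoothing gradient identity $\nabla F_\sigma(x) = \frac{1}{\sigma}\,\mathbb{E}_{u\sim\mathcal{N}(0,I)}\!\left[f_N(x+\sigma u)\,u\right]$ estimated by Monte Carlo; the function names, estimator, and hyperparameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_smoothed_grad(f_N, x, sigma, n_samples=64, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian smoothing of f_N,
    via grad F_sigma(x) = (1/sigma) * E_{u~N(0,I)}[ f_N(x + sigma*u) * u ]."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((n_samples, x.size))
    vals = np.array([f_N(x + sigma * ui) for ui in u])
    return (vals[:, None] * u).mean(axis=0) / sigma

def power_smoothed_ascent(f, x0, N=5.0, sigma=1.0, lr=0.05, steps=300):
    """Two-step scheme (illustrative hyperparameters):
    (1) exponential power-N transform f_N = exp(N * f);
    (2) stochastic gradient ascent on the Gaussian-smoothed f_N."""
    f_N = lambda x: np.exp(N * f(x))                   # step (1)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = gaussian_smoothed_grad(f_N, x, sigma)      # step (2)
        x = x + lr * g
    return x

# Toy usage: a non-differentiable multimodal objective, global maximum at x = 0.
f = lambda x: -np.abs(x).sum() + 0.5 * np.cos(3.0 * x).sum()
print(power_smoothed_ascent(f, x0=np.array([2.0, -1.5])))
```

Larger $N$ sharpens $f_N$ around the global maximizer, which is what drives the $\delta$-neighborhood guarantee, at the cost of a higher-variance gradient estimate in this kind of sampler.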