In this work, we investigate a stochastic control framework for global optimization over both Euclidean spaces and the Wasserstein space of probability measures, where the objective function may be non-convex and/or non-differentiable. In the Euclidean setting, the original minimization problem is approximated by a family of regularized stochastic control problems; using dynamic programming, we analyze the associated Hamilton--Jacobi--Bellman equations and obtain tractable representations via the Cole--Hopf transformation and the Feynman--Kac formula. For optimization over probability measures, we formulate a regularized mean-field control problem characterized by a master equation, and further approximate it by controlled $N$-particle systems. We establish that, as the regularization parameter tends to zero (and as the particle number tends to infinity for the optimization over probability measures), the value of the regularized control problem converges to the global minimum of the original objective. Building on the resulting probabilistic representations, we propose Monte Carlo-based numerical schemes and report numerical experiments that illustrate the effectiveness of the methods and support the theoretical convergence rates.
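To make the flavor of such probabilistic representations concrete, the following is a minimal sketch of a Monte Carlo estimator for a Gibbs/softmin-type quantity of the form $v_\lambda(x) = -\lambda \log \mathbb{E}[\exp(-f(X_T)/\lambda)]$, which is the kind of expression produced by combining a Cole--Hopf transformation with the Feynman--Kac formula for a heat-type equation. The specific dynamics (an uncontrolled Brownian evaluation point), the function and parameter names (`softmin_monte_carlo`, `rastrigin`, `lam`, `T`), and the test objective are illustrative assumptions, not the paper's exact regularization or numerical scheme.

```python
import numpy as np

def softmin_monte_carlo(f, x0, lam=0.1, T=1.0, n_samples=10_000, rng=None):
    """Monte Carlo estimate of a Gibbs/softmin-type value
        v_lam(x0) = -lam * log E[ exp(-f(X_T) / lam) ],
    with X_T = x0 + sqrt(2*lam) * W_T a Brownian evaluation point.
    As lam -> 0 the log-mean-exp concentrates on the smallest sampled
    value of f; this illustrates how a regularized value approaches a
    minimum, without reproducing the paper's exact scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = np.size(x0)
    # Terminal points of the uncontrolled diffusion: X_T ~ N(x0, 2*lam*T*I).
    X_T = np.asarray(x0) + np.sqrt(2.0 * lam * T) * rng.standard_normal((n_samples, d))
    vals = np.apply_along_axis(f, 1, X_T)
    # Numerically stable log-mean-exp of -vals/lam.
    a = -vals / lam
    a_max = a.max()
    log_mean_exp = a_max + np.log(np.mean(np.exp(a - a_max)))
    return -lam * log_mean_exp

if __name__ == "__main__":
    # Hypothetical test case: non-convex Rastrigin-type objective in 2D
    # (global minimum 0 at the origin).
    rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    for lam in (1.0, 0.3, 0.1):
        est = softmin_monte_carlo(rastrigin, x0=np.array([1.5, -1.0]), lam=lam)
        print(f"lambda = {lam:>4}: estimated regularized value = {est:.4f}")
```

Note that in this simplified sampler the spread of the evaluation points shrinks with $\lambda$, so the limit is a localized (Moreau-envelope-like) quantity rather than the global minimum; recovering the global minimum requires the full control formulation and parameter regimes analyzed in the paper.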