The use of stochastic differential equations in multi-objective optimization has been limited, in practice, by two persistent gaps: incomplete stability analyses and the absence of accessible implementations. We revisit a drift--diffusion model for unconstrained vector optimization in which the drift is induced by a common descent direction and the diffusion term preserves exploratory behavior. The main theoretical contribution is a self-contained Lyapunov analysis establishing global existence, pathwise uniqueness, and non-explosion under a dissipativity condition, together with positive recurrence under an additional coercivity assumption. We also derive an Euler--Maruyama discretization and implement the resulting iteration as a \emph{pymoo}-compatible algorithm -- \emph{pymoo} being an open-source Python framework for multi-objective optimization -- with an interactive \emph{PymooLab} front-end for reproducible experiments. Empirical results on DTLZ2 with objective counts from three to fifteen indicate a consistent trade-off: compared with established evolutionary baselines, the method is less competitive when the number of objectives is small, but it remains a viable option under restricted evaluation budgets as the objective count grows. Taken together, these observations suggest that stochastic drift--diffusion search occupies a mathematically tractable niche alongside population-based heuristics -- not as a replacement, but as an alternative whose favorable properties are amenable to rigorous analysis.
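The Euler--Maruyama iteration summarized above can be sketched in a few lines. The sketch below is illustrative only: the bi-objective quadratic problem, the closed-form min-norm common descent direction for two objectives, and the decaying-noise schedule are assumptions made for the example, not the paper's DTLZ2 benchmark or its \emph{pymoo} implementation.

```python
import numpy as np

def euler_maruyama_step(x, drift, sigma, h, rng):
    """One Euler--Maruyama step: x' = x + h*drift(x) + sigma*sqrt(h)*xi."""
    xi = rng.standard_normal(x.shape)
    return x + h * drift(x) + sigma * np.sqrt(h) * xi

# Illustrative bi-objective problem (hypothetical; not the DTLZ2 benchmark):
# f1(x) = ||x - a||^2 and f2(x) = ||x + a||^2, Pareto set = segment [-a, a].
a = np.array([1.0, 0.0])
grad_f1 = lambda x: 2.0 * (x - a)
grad_f2 = lambda x: 2.0 * (x + a)

def common_descent(x):
    """Negative min-norm convex combination of the two gradients.

    For m = 2 objectives the min-norm element of conv{g1, g2} is
    t* g1 + (1 - t*) g2 with t* = <g2, g2 - g1> / ||g1 - g2||^2 clipped
    to [0, 1]; its negative is a common descent direction when nonzero.
    """
    g1, g2 = grad_f1(x), grad_f2(x)
    diff = g1 - g2
    denom = float(diff @ diff)
    t = 0.5 if denom == 0.0 else float(np.clip(g2 @ (g2 - g1) / denom, 0.0, 1.0))
    return -(t * g1 + (1.0 - t) * g2)

rng = np.random.default_rng(0)
x = np.array([2.0, 2.0])
for k in range(200):
    # Decaying noise keeps early exploration while letting the drift dominate.
    x = euler_maruyama_step(x, common_descent, sigma=0.1 / (1 + k), h=0.05, rng=rng)
```

With \texttt{sigma} fixed at zero the scheme reduces to deterministic multiple-gradient descent and drives the iterate to a Pareto-stationary point; the decaying-noise schedule above is one simple choice, not the schedule analyzed in the paper.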