Global minimization is a fundamental challenge in optimization, especially in machine learning, where finding the global minimum of a function directly impacts model performance and convergence. This article introduces a novel optimization method, which we call Super Gradient Descent, designed specifically for one-dimensional functions and guaranteed to converge to the global minimum of any k-Lipschitz function defined on a closed interval [a, b]. Our approach addresses a key limitation of traditional optimization algorithms, which often become trapped in local minima. In particular, we introduce the concept of the global gradient, which offers a robust mechanism for precise, well-guided global optimization. By focusing on the global minimization problem, this work bridges a critical gap in optimization theory and offers new insights and practical advances for a range of optimization problems, in particular machine learning subroutines such as line search.
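The Super Gradient Descent algorithm itself is not reproduced in this abstract. As a minimal illustration of why the k-Lipschitz assumption makes one-dimensional global minimization tractable at all, the sketch below uses a plain grid search: if f is k-Lipschitz on [a, b], sampling on a grid of spacing h = 2ε/k guarantees the best sampled value is within ε of the true global minimum, since f can change by at most k·h/2 between the true minimizer and its nearest grid point. The function names and the example objective here are illustrative assumptions, not part of the paper.

```python
import math

def lipschitz_grid_min(f, a, b, k, eps):
    """Approximate the global minimum of a k-Lipschitz f on [a, b].

    With grid spacing h = 2*eps/k, the best sampled value is within
    eps of the true global minimum: |f(x) - f(x*)| <= k * h / 2 = eps
    for the grid point x nearest the true minimizer x*.
    (Illustrative baseline only -- not the paper's algorithm.)
    """
    n = max(1, math.ceil(k * (b - a) / (2 * eps)))  # number of grid intervals
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    x_best = min(xs, key=f)
    return x_best, f(x_best)

# Example (assumed): f(x) = sin(3x) + 0.5x has |f'(x)| <= 3.5 on [-2, 2],
# so it is 3.5-Lipschitz there; its global minimum lies near x = -0.579.
x, fx = lipschitz_grid_min(lambda x: math.sin(3 * x) + 0.5 * x,
                           -2.0, 2.0, k=3.5, eps=1e-3)
```

This baseline needs O(k(b − a)/ε) function evaluations, which motivates methods, like the one proposed here, that use gradient information to reach the global minimum more efficiently.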