Global minimization is a fundamental challenge in optimization, especially in machine learning, where finding the global minimum of a function directly impacts model performance and convergence. This report introduces a novel optimization method, which we call Super Gradient Descent, designed specifically for one-dimensional functions and guaranteed to converge to the global minimum of any k-Lipschitz function defined on a closed interval [a, b]. Our approach addresses a key limitation of traditional optimization algorithms, which often become trapped in local minima. In particular, we introduce the concept of the global gradient, which offers a robust mechanism for precise, well-guided global optimization. By focusing on the global minimization problem, this work bridges a critical gap in optimization theory and offers new insights and practical advances for a range of optimization problems, in particular machine-learning subproblems such as line search.
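For context, the problem setting can be illustrated with a classic Lipschitz-based global minimizer in the Piyavskii–Shubert style. This is a hedged sketch of a standard baseline, not the Super Gradient Descent method described above: it uses the Lipschitz constant k to build a sawtooth lower bound over [a, b] and repeatedly samples where that bound is smallest. The function name and parameters here are illustrative assumptions.

```python
# Sketch of a classic Lipschitz-based global minimizer (Piyavskii-Shubert
# style) on a closed interval [a, b]. Shown for context only; this is NOT
# the paper's Super Gradient Descent algorithm.

def lipschitz_minimize(f, a, b, k, n_iter=200):
    """Approximate the global minimum of a k-Lipschitz function f on [a, b]."""
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(n_iter):
        best_x, best_lb = None, float("inf")
        pts = sorted(zip(xs, fs))
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # The lower bound max(f1 - k(x - x1), f2 - k(x2 - x)) attains
            # its minimum between consecutive samples x1 < x2 at:
            x_new = 0.5 * (x1 + x2) + (f1 - f2) / (2 * k)
            lb = 0.5 * (f1 + f2) - 0.5 * k * (x2 - x1)
            if lb < best_lb:
                best_lb, best_x = lb, x_new
        # Sample f where the global lower bound is currently smallest.
        xs.append(best_x)
        fs.append(f(best_x))
    i = min(range(len(fs)), key=fs.__getitem__)
    return xs[i], fs[i]
```

Because a k-Lipschitz function can decrease at most at rate k, the sawtooth bound never overshoots the true function, which is what yields a global (rather than local) convergence guarantee on [a, b].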