Goldstein's 1977 idealized iteration for minimizing a Lipschitz objective fixes a distance (the step size) and relies on a certain approximate subgradient. That "Goldstein subgradient" is the shortest convex combination of objective gradients at points within that distance of the current iterate. A recent implementable Goldstein-style algorithm admits a remarkable complexity analysis (Zhang et al., 2020), and a more sophisticated variant (Davis and Jiang, 2022) leverages typical objective geometry to force near-linear convergence. To explore such methods, we introduce a new modulus, based on Goldstein subgradients, that robustly measures the slope of a Lipschitz function. We relate near-linear convergence of Goldstein-style methods to linear growth of this modulus at minimizers, and we illustrate the idea computationally with a simple heuristic for Lipschitz minimization.
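To make the Goldstein subgradient concrete, the following is a minimal sketch (not the authors' algorithm) of one Goldstein-style step: sample gradients at points within the fixed distance of the current iterate, approximate the minimum-norm element of their convex hull by Frank-Wolfe over the probability simplex, and move the fixed distance against that direction. The sampling scheme, sample count, and Frank-Wolfe iteration budget are all illustrative choices, not taken from the paper.

```python
import numpy as np

def goldstein_step(x, grad, radius, n_samples=30, fw_iters=200, rng=None):
    """One Goldstein-style step (an illustrative sketch).

    Samples gradients of the objective at points near x, approximates the
    shortest convex combination of those gradients (a proxy for the
    Goldstein subgradient), and steps distance `radius` against it.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.size
    # Sample points near x (a cube of half-width `radius`, for simplicity;
    # the idealized method uses the ball of radius `radius`).
    pts = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, d))
    pts[0] = x  # always include the current iterate
    G = np.array([grad(p) for p in pts])  # (n_samples, d) sampled gradients

    # Frank-Wolfe for min_w ||w @ G||^2 over the probability simplex:
    # the optimal w @ G is the min-norm point of the gradients' convex hull.
    w = np.full(n_samples, 1.0 / n_samples)
    for t in range(fw_iters):
        g = w @ G                  # current convex combination, shape (d,)
        scores = G @ g             # gradient of the objective w.r.t. w (up to 2x)
        i = int(np.argmin(scores)) # best simplex vertex for the linear subproblem
        gamma = 2.0 / (t + 2.0)    # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w
        w[i] += gamma

    g = w @ G                      # approximate Goldstein subgradient
    norm = np.linalg.norm(g)
    if norm < 1e-12:
        return x, 0.0              # approximately Goldstein-stationary
    return x - radius * g / norm, norm
```

For example, iterating this step with a slowly shrinking radius on the nonsmooth objective f(x) = |x1| + |x2| (whose gradient, where defined, is the componentwise sign) drives the iterate toward the minimizer at the origin; near the kinks, gradients of both signs enter the sample, and the min-norm combination automatically damps the corresponding component of the step.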