Stochastic Gradient (SG) is the de facto iterative technique for solving stochastic optimization (SO) problems with a smooth (non-convex) objective $f$ and a stochastic first-order oracle. SG's attractiveness is due in part to the simplicity of its update: a single step along the negative subsampled gradient direction to update the incumbent iterate. In this paper, we question SG's choice of executing a single step, as opposed to multiple steps, between subsample updates. Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA) where, during each iteration, a "deterministic solver" executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus rigorizes a strategy that is appealing for implementation: during each iteration, "plug in" a solver, e.g., L-BFGS with line search or Newton-CG, as is, and solve only to the extent necessary. We develop a complete theory using the relative error of the observed gradients as the principal object, demonstrating that almost sure and $L_1$ consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity of RA (for linear and sub-linear solvers), and identify a practical termination criterion leading to optimal complexity rates. To subsume non-convex $f$, we present a certain "random central limit theorem" that incorporates the effect of curvature across all first-order critical points, demonstrating that the asymptotic behavior is described by a certain mixture of normals. The message from our numerical experiments is that RA's ability to incorporate existing second-order deterministic solvers in a strategic manner may be important from the standpoint of dispensing with hyper-parameter tuning.
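To make the RA outer loop concrete, the following is a minimal sketch in Python, assuming SciPy's L-BFGS-B as the plug-in deterministic solver. The interface names (`f_batch`, `sampler`) and the geometric sample-size and tolerance schedules (`growth`, `shrink`) are illustrative assumptions for this sketch, not the paper's prescriptions.

```python
import numpy as np
from scipy.optimize import minimize

def retrospective_approximation(f_batch, x0, sampler, m0=64, growth=1.2,
                                tol0=1e-1, shrink=0.5, n_outer=20, seed=0):
    """Sketch of the RA outer loop (schedules here are illustrative).

    f_batch(x, batch) -> (value, gradient) of the subsampled objective.
    sampler(m, rng)   -> indices forming a subsample of size m.
    """
    rng = np.random.default_rng(seed)
    x, m, tol = np.asarray(x0, dtype=float), float(m0), tol0
    for _ in range(n_outer):
        batch = sampler(int(m), rng)  # fresh subsample for this iteration
        # "Plug in" an off-the-shelf deterministic solver (here L-BFGS-B)
        # and solve the subsampled problem only to accuracy `tol`.
        res = minimize(lambda z: f_batch(z, batch), x, jac=True,
                       method="L-BFGS-B", options={"gtol": tol})
        x = res.x
        m *= growth    # grow the sample size ...
        tol *= shrink  # ... while tightening the inner tolerance
    return x

# Toy usage: minimize E[(x - D)^2]/2 over data D; the solution is the mean.
data = np.random.default_rng(1).normal(3.0, 1.0, size=100_000)

def f_batch(x, batch):
    r = x[0] - data[batch]
    return 0.5 * np.mean(r**2), np.array([np.mean(r)])

x_star = retrospective_approximation(
    f_batch, x0=np.zeros(1),
    sampler=lambda m, rng: rng.integers(0, data.size, m))
print(x_star)  # approaches 3.0 as sample sizes grow
```

The geometric growth and shrink schedules here mirror the abstract's requirement that sample sizes increase at appropriate rates; the paper's actual termination criterion is based on the relative error of the observed gradients rather than a fixed `gtol` schedule.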