The learning rate in stochastic gradient methods is a critical hyperparameter that is notoriously costly to tune via standard grid search, especially for training modern large-scale models with billions of parameters. We identify a theoretical advantage of learning rate annealing schemes that decay the learning rate to zero at a polynomial rate, such as the widely used cosine schedule, by demonstrating their increased robustness to misspecification of the initial stepsize parameter arising from a coarse grid search. We present an analysis in a stochastic convex optimization setup showing that the convergence rate of stochastic gradient descent with annealed schedules depends only sublinearly on the multiplicative misspecification factor $\rho$ (i.e., the grid resolution), achieving a rate of $O(\rho^{1/(2p+1)}/\sqrt{T})$, where $p$ is the degree of polynomial decay and $T$ is the number of steps, in contrast to the $O(\rho/\sqrt{T})$ rate that arises with fixed stepsizes and exhibits a linear dependence on $\rho$. Experiments confirm the increased robustness compared to tuning with a fixed stepsize, which has significant implications for the computational overhead of hyperparameter search in practical training scenarios.
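To make the schedule family concrete, the following is a minimal sketch (not taken from the paper; the function names `annealed_lr` and `cosine_lr` and the demo constants are ours) of a polynomial-decay stepsize $\eta_t = \eta\,(1 - t/T)^p$ and of cosine annealing, which decays roughly quadratically (i.e., $p = 2$) near the end of training:

```python
import math

def annealed_lr(eta: float, t: int, T: int, p: float = 1.0) -> float:
    """Polynomially annealed stepsize: eta_t = eta * (1 - t/T)**p."""
    return eta * (1.0 - t / T) ** p

def cosine_lr(eta: float, t: int, T: int) -> float:
    """Cosine annealing to zero over T steps; near t = T it behaves like
    eta * (pi**2 / 4) * (1 - t/T)**2, i.e. polynomial decay with p = 2."""
    return eta * 0.5 * (1.0 + math.cos(math.pi * t / T))

# Illustration (hypothetical numbers): suppose a coarse grid overshoots the
# ideal base stepsize eta_star by a factor rho. With an annealed schedule the
# stepsizes late in training are still small, which is the intuition behind
# the sublinear dependence on rho described in the abstract.
if __name__ == "__main__":
    T, eta_star, rho = 1000, 0.01, 4.0
    for t in (0, T // 2, 9 * T // 10):
        print(t, annealed_lr(rho * eta_star, t, T, p=2.0), cosine_lr(rho * eta_star, t, T))
```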