Diagonal preconditioners are computationally feasible approximations of second-order optimizers, which have shown significant promise in accelerating the training of deep learning models. Two predominant approaches are based on Adam and Gauss-Newton (GN) methods: the former leverages statistics of current gradients and is the de facto optimizer for neural networks, while the latter uses the diagonal elements of the Gauss-Newton matrix and underpins recent diagonal optimizers such as Sophia. In this work, we compare these two diagonal preconditioning methods through the lens of two key factors: the choice of basis in the preconditioner, and the impact of gradient noise from mini-batching. To gain insight, we analyze these optimizers on quadratic objectives and logistic regression across all four quadrants defined by these two factors. We show that, regardless of the basis, there exist instances where Adam outperforms both GN$^{-1}$ and GN$^{-1/2}$ in the full-batch setting. Conversely, in the stochastic regime, Adam behaves similarly to GN$^{-1/2}$ for linear regression under a Gaussian data assumption. These theoretical results are supported by empirical studies on both convex and non-convex objectives.
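To make the comparison concrete, the following is a minimal sketch (not the paper's code) contrasting the three diagonal preconditioners on a quadratic objective with a diagonal Hessian, where the Gauss-Newton matrix coincides with the Hessian. The function names, step size, and the simplified Adam update (second moment only, no momentum) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: diagonal preconditioning on f(x) = 0.5 * x^T H x with diagonal H.
# Here the Gauss-Newton matrix equals H, so GN^{-1} and GN^{-1/2} are exact scalings.

rng = np.random.default_rng(0)
d = 10
h = rng.uniform(0.1, 10.0, size=d)        # diagonal of the Hessian H (assumed)
x0 = rng.normal(size=d)

def run(precond, steps=200, lr=0.1, beta2=0.999, eps=1e-8):
    """Preconditioned gradient descent; `precond` selects the diagonal scaling."""
    x = x0.copy()
    v = np.zeros(d)                        # Adam-style second-moment accumulator
    for t in range(1, steps + 1):
        g = h * x                          # full-batch gradient of the quadratic
        if precond == "adam":              # scale by 1 / sqrt(EMA of g^2), no momentum
            v = beta2 * v + (1 - beta2) * g**2
            step = g / (np.sqrt(v / (1 - beta2**t)) + eps)
        elif precond == "gn_inv":          # GN^{-1}: divide by the GN diagonal
            step = g / (h + eps)
        elif precond == "gn_inv_sqrt":     # GN^{-1/2}: divide by its square root
            step = g / (np.sqrt(h) + eps)
        x -= lr * step
    return 0.5 * np.sum(h * x**2)          # final loss value

for name in ["adam", "gn_inv", "gn_inv_sqrt"]:
    print(f"{name:12s} final loss: {run(name):.3e}")
```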