This paper introduces a new stochastic optimization method based on the regularized Fisher information matrix (FIM), named SOFIM, which efficiently uses the FIM to approximate the Hessian matrix when computing Newton's gradient update in large-scale stochastic optimization of machine learning models. It can be viewed as a variant of natural gradient descent (NGD), where the challenge of storing and computing the full FIM is addressed by using the regularized FIM and directly obtaining the gradient update direction via Sherman-Morrison matrix inversion. Additionally, like the popular Adam method, SOFIM uses the first moment of the gradient to handle non-stationary objectives across mini-batches caused by heterogeneous data. The combination of the regularized FIM and Sherman-Morrison matrix inversion yields an improved convergence rate with the same space and time complexities as stochastic gradient descent (SGD) with momentum. Extensive experiments on training deep learning models on several benchmark image classification datasets demonstrate that the proposed SOFIM outperforms SGD with momentum and several state-of-the-art Newton-type optimization methods, such as Nyström-SGD, L-BFGS, and AdaHessian, in terms of convergence speed toward pre-specified targets of training and test losses as well as test accuracy.
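The core idea described above can be illustrated with a minimal sketch. It assumes the regularized FIM takes the rank-1 form F = λI + g gᵀ built from a mini-batch gradient g, so that Sherman-Morrison gives F⁻¹m in O(d) time and memory; the hyperparameter names (`lam`, `beta`, `lr`) and this exact update form are illustrative assumptions, not the paper's definitive implementation:

```python
import numpy as np

def sofim_step(theta, grad, m, lam=1.0, beta=0.9, lr=0.1):
    """One illustrative SOFIM-style update (a sketch, not the paper's exact algorithm)."""
    # First moment of the gradient (as in Adam), smoothing
    # non-stationary mini-batch objectives.
    m = beta * m + (1.0 - beta) * grad
    # Regularized rank-1 FIM approximation: F = lam * I + grad grad^T.
    # Sherman-Morrison gives the Newton-like direction F^{-1} m in O(d):
    #   F^{-1} m = (1/lam) * (m - (grad.m) / (lam + grad.grad) * grad)
    direction = (m - (grad @ m) / (lam + grad @ grad) * grad) / lam
    theta = theta - lr * direction
    return theta, m
```

Because only the current gradient and the first moment are stored, the space and per-step time costs match SGD with momentum, consistent with the complexity claim in the abstract.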