Natural gradient descent (NGD) is a powerful optimization technique for machine learning, but the computational cost of inverting the Fisher information matrix limits its application to training deep neural networks. To overcome this challenge, we propose structured natural gradient descent (SNGD), a novel optimization method for training deep neural networks. Theoretically, we show that optimizing the original network with NGD is equivalent to optimizing, with fast gradient descent (GD), a reconstructed network obtained by a structural transformation of the parameter matrix. We thereby decompose the computation of the global Fisher information matrix into efficient computations of local Fisher matrices, realized by constructing local Fisher layers in the reconstructed network, which speeds up training. Experimental results on various deep networks and datasets demonstrate that SNGD converges faster than NGD while finding solutions of comparable quality. Moreover, our method outperforms traditional GD methods in both efficiency and effectiveness. Our proposed method thus has the potential to significantly improve the scalability and efficiency of NGD in deep learning applications. Our source code is available at https://github.com/Chaochao-Lin/SNGD.
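The core computational saving can be sketched generically: instead of inverting one global Fisher matrix of dimension equal to the total parameter count, invert (or solve against) small per-layer Fisher matrices. The sketch below is a minimal illustration of this block-diagonal idea, not the paper's exact SNGD construction; the layer dimensions, batch size, and damping term are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: gradient vectors for a 3-layer network.
layer_dims = [4, 3, 2]
grads = [rng.normal(size=d) for d in layer_dims]

def local_fisher(g_samples, damping=1e-3):
    """Empirical per-layer ("local") Fisher estimate from sampled
    gradients, with Tikhonov damping to guarantee invertibility."""
    F = sum(np.outer(g, g) for g in g_samples) / len(g_samples)
    return F + damping * np.eye(F.shape[0])

def block_natural_gradient(grads, fishers):
    """Per-layer natural-gradient direction: solve F_l d_l = g_l for
    each layer, costing O(sum d_l^3) instead of O((sum d_l)^3) for
    a single global Fisher inverse."""
    return [np.linalg.solve(F, g) for F, g in zip(fishers, grads)]

# Simulated per-layer gradient batches (32 samples each).
batches = [[rng.normal(size=d) for _ in range(32)] for d in layer_dims]
fishers = [local_fisher(b) for b in batches]
steps = block_natural_gradient(grads, fishers)
```

Each `steps[l]` then replaces the raw gradient in the parameter update for layer `l`, giving a natural-gradient-like step at per-layer rather than global cost.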