Gradient regularization (GR) has been shown to improve the generalization of trained models. While natural gradient descent is known to accelerate optimization in the initial phase of training, little attention has been paid to how the training dynamics of second-order optimizers can benefit from GR. In this work, we propose Gradient-Regularized Natural Gradients (GRNG), a family of scalable second-order optimizers that integrate explicit gradient regularization with natural gradient updates. Our framework provides two complementary algorithms: a frequentist variant that avoids explicit inversion of the Fisher Information Matrix (FIM) via structured approximations, and a Bayesian variant based on a Regularized-Kalman formulation that eliminates the need for FIM inversion entirely. We establish convergence guarantees for GRNG, showing that gradient regularization improves stability and enables convergence to global minima. Empirically, we demonstrate that GRNG consistently enhances both optimization speed and generalization compared to first-order methods (SGD, AdamW) and second-order baselines (K-FAC, Sophia), with strong results on vision and language benchmarks. Our findings highlight gradient regularization as a principled and practical tool to unlock the robustness of natural gradient methods for large-scale deep learning.
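For concreteness, a minimal sketch of how explicit gradient regularization can be combined with a natural gradient step is given below. The squared gradient-norm penalty with coefficient $\lambda$ and the damping term $\epsilon I$ are illustrative assumptions drawn from the standard GR and natural-gradient literature; the exact penalty, structured FIM approximation, and Kalman-based update used by GRNG may differ.

$$
\tilde{\mathcal{L}}(\theta) \;=\; \mathcal{L}(\theta) \;+\; \frac{\lambda}{2}\,\bigl\|\nabla_\theta \mathcal{L}(\theta)\bigr\|_2^2,
\qquad
\theta_{t+1} \;=\; \theta_t \;-\; \eta\,\bigl(F(\theta_t) + \epsilon I\bigr)^{-1}\nabla_\theta \tilde{\mathcal{L}}(\theta_t),
$$

where $F(\theta)$ denotes the Fisher Information Matrix, $\eta$ the step size, and $\tilde{\mathcal{L}}$ the gradient-regularized objective whose gradient drives the preconditioned update.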