Training deep neural networks is a structured optimization problem: the parameters are naturally represented as matrices and tensors rather than as vectors. Under this structured representation, it has been widely observed that gradients are low-rank and Hessians are approximately block diagonal. These structural properties are crucial for designing efficient optimization algorithms, yet many popular optimizers, such as Adam, do not exploit them. In this paper, we present ASGO, a novel optimization algorithm that capitalizes on these properties by employing a preconditioner that is adaptively updated with structured gradients. Through fine-grained theoretical analysis, we prove that ASGO achieves superior convergence rates compared to existing structured gradient methods. Building on this convergence theory, we further demonstrate that ASGO benefits from low-rank gradients and block diagonal Hessians. We also discuss practical modifications of ASGO and empirically verify its effectiveness on language model tasks. Code is available at https://github.com/infinity-stars/ASGO.
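To make the idea concrete, the following is a minimal sketch of a matrix-preconditioned update in the spirit described above: a one-sided, full-matrix second-moment statistic accumulated from the matrix-shaped gradient, whose inverse square root preconditions the step. This is an illustrative assumption, not the actual ASGO update rule; the function name `asgo_like_step` and all hyperparameters are hypothetical, and the paper's algorithm should be consulted for the exact form.

```python
import numpy as np

def asgo_like_step(W, G, V, lr=1e-2, beta=0.99, eps=1e-8):
    """Hypothetical sketch of one structured preconditioned step.

    W : (m, n) parameter matrix
    G : (m, n) gradient, kept in its matrix form (not flattened)
    V : (m, m) accumulated preconditioner state
    """
    # Accumulate a one-sided second-moment statistic from the
    # structured gradient; V stays symmetric positive semidefinite.
    V = beta * V + (1.0 - beta) * (G @ G.T)
    # Inverse square root of V via eigendecomposition.
    lam, Q = np.linalg.eigh(V)
    inv_sqrt = Q @ np.diag(1.0 / np.sqrt(np.maximum(lam, 0.0) + eps)) @ Q.T
    # Precondition the gradient and take the step.
    W = W - lr * (inv_sqrt @ G)
    return W, V
```

Note that because `V` is an m-by-m matrix rather than a full mn-by-mn one, this kind of preconditioner is only feasible because the gradient is treated as a matrix; flattening the parameters into a vector would destroy that structure.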